Tag: google

Facebook, Twitter, Google are a new type of media platform, and new rules apply

When Congress questioned representatives of Facebook, Google and Twitter, it became official: We need to finally find an answer to a debate that’s been bubbling for months (if not years) about the role of the tech companies—Google, Apple, Facebook, Amazon, Microsoft, or GAFAM—and their platforms.

The question is summed up by Ted Cruz’s line of inquiry (and here’s a person I never expected to quote) in the Congressional hearing: “Do you consider your sites to be neutral public fora?” (Some others echoed versions of this question.)

Platform or media?

Simply put, the question boils down to this: Are GAFAM tech companies or media companies? Are they held to standards (and regulation) of “neutral platform” or “content creator”? Are they dumb infrastructure or pillars of democracy?

These are big questions, and I don’t envy the companies their position in this one. As a neutral platform they get a large degree of freedom, but they have to tolerate the hate speech and abuse on their platform. As a media company they get to shape the conversation more actively, but they can’t claim the extreme free-speech position they like to take. You can’t both be neutral and “bring humanity together” as Mark Zuckerberg intends. As Ben Thompson points out on Stratechery (potentially paywalled), neutrality might be the “easier” option:

the “safest” position for the company to take would be the sort of neutrality demanded by Cruz — a refusal to do any sort of explicit policing of content, no matter how objectionable. That, though, was unacceptable to the company’s employee base specifically, and Silicon Valley broadly

I agree this would be easier. (I’m not so sure that the employee preference is the driving force; that’s another debate, but it certainly plays a role.) Also, let’s not forget that each of these companies plays a global game, and wherever they operate they have to meet legal requirements. Where are they willing to draw the line? Google famously pulled out of the Chinese market a few years ago, presumably because they didn’t want to meet the government’s censorship requirements. That was a principled move, and given the size of that market, presumably not an easy one. But where do you draw the line? US rules on nudity? German rules on censoring Nazi glorification and hate speech? Chinese rules on censoring pro-democracy reporting, or on government surveillance?

For GAFAM, the position has traditionally been clear cut and quite straightforward, which we can still (kind of, sort of) see in the Congressional hearing:

“We don’t think of it in the terms of ‘neutral,'” [Facebook General Counsel Colin] Stretch continued, pointing out that Facebook tries to give users a personalized feed of content. “But we do think of ourselves as — again, within the boundaries that I described — open to all ideas without regard to viewpoint or ideology.” (Source: Recode)

Once more:

[Senator John] Kennedy also asked Richard Salgado, Google’s director of law enforcement and information security, whether the company is a “newspaper” or a neutral tech platform. Salgado replied that Google is a tech company, to which Kennedy quipped, “that’s what I thought you’d say.” (Source: Business Insider)

Now that’s interesting, because while they claim to be “neutral” free speech companies, Facebook and the others have of course been heavily filtering content by various means (from their Terms of Service to community guidelines) and shaping the attention flow (who sees what, and when) all along.

This aspect isn’t discussed much, but it’s worth noting nonetheless: how Facebook and other tech firms deal with content has been based, to a relatively large degree, on United States legal and cultural standards. That makes sense given that they’re US companies, but it doesn’t make a lot of sense given that they operate globally. To name just two examples from above that highlight how legal and cultural standards differ from country to country: take pictures of nudity (largely not OK in the US, largely OK in Germany) versus positively referencing the Third Reich (largely illegal in Germany, largely legal in the US).

Big tech platforms are a new type of media platform

Here’s the thing: These big tech platforms aren’t neutral platforms for debate, nor are they traditional media platforms. They are neither dumb tech (they actively choose, frame, and shape content and traffic) nor traditional media companies that (at least notionally) primarily engage in content creation. These big tech platforms are a new type of media platform, and new rules apply. Hence, they require new ways of thinking and analysis, as well as new approaches to regulation.

(As a personal, rambling aside: Given that we’ve been discussing the transformational effects of digital media, and especially social media, for well over a decade now, how do we still have to have this debate in 2017? I genuinely thought we had at least sorted out our basic understanding of social media as a new hybrid by 2010. Sigh.)

We might be able to apply existing regulatory—and equally important: analytical—frameworks, or find ways to apply them in new contexts. But, and I say this expressly without judgement, these are platforms that operate at a scale and dynamism we haven’t seen before. They are of a new quality; they display combinations of qualities and characteristics we don’t have much experience with. Yet on a societal level we’ve been viewing them through the old lenses of either media (“a newspaper”, “broadcast”) or neutral platforms (“tubes”, “electricity”). That hasn’t worked, and it will continue not to work, because it makes little sense.

That’s why it’s important to take a breath and figure out how to best understand implications, and shape the tech, the organizations, the frameworks within which they operate.

It might turn out, and I’d say it’s likely, that they operate within some frameworks but outside others, and in those cases we need to adjust the frameworks, the organizations, or both. To align the analytical and regulatory frameworks with realities, or vice versa.

This isn’t an us-versus-them situation, as many parties insinuate: It’s not politics versus tech, as actors on both the governmental and the tech side sometimes seem to think. It’s not tech versus civil society, as some activists claim. It’s certainly not Silicon Valley against the rest of the world, even though a little more cultural sensitivity might do SV firms a world of good. This is a question of how we want to live and govern our lives as they are impacted by the flow of information.

It’s going to be tricky to figure this out as there are many nation states involved, and some supra-national actors, and large global commercial actors and many other, smaller but equally important players. It’s a messy mix of stakeholders and interests.

But one thing I can promise: The solution won’t be just technical, just legal, or just cultural. It’ll be a slow and messy process that involves all three fields, and a lot of work. We know that the status quo isn’t working for too many people, and we can shape the future. So that soon, it’ll work for many more people—maybe for all.

Please note that this is cross-posted from Medium. Also, for full transparency, we work occasionally with Google.

Google’s new push to AI-powered services

At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones there were updated versions of their smart home hub, Google Home, and some altogether new types of product.

I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out as markers of a larger shift in the industry: the new role of artificial intelligence (AI) as it seeps into consumer goods.

Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is of course publicly available.)

We now see this shift of focus play out as it manifests in products.

Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:

We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.

First things first: I fully agree – there’s currently no other company that’s as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” That was when Google had only hinted at their capabilities in terms of new features, but had also announced building AI infrastructure for third parties to use. AI as a platform: Google has it.

Before diving into some structural thoughts, let’s look at two specific products they launched:

  1. Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a certain person’s face is in the picture, say, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device. This is remarkable both as a technical achievement and as an approach. Google has become a company of high centralization—the bane of cloud computing, I’d lament. Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization.
  2. Google’s new in-ear headphones offer live translation. That’s right: these headphones should allow for multi-language, human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if it eases travel-related helplessness just a bit, it’d be a big deal.
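To make the edge-computing point about Clips concrete, the on-device pattern can be sketched roughly like the loop below. Everything here is a hypothetical stand-in (the model interface, the threshold, the storage); none of it reflects Google’s actual implementation.

```python
# Rough sketch of a Clips-style edge-inference loop: a local model scores each
# frame, and only frames above a threshold are kept. No frame leaves the device.
# `model` is any object with a `score(frame) -> float` method (hypothetical).

def clip_worthy(score: float, threshold: float = 0.8) -> bool:
    """Decide locally whether a frame is worth keeping."""
    return score >= threshold

def run_camera_loop(frames, model, storage):
    """Score frames on-device and keep the interesting ones."""
    for frame in frames:
        score = model.score(frame)   # inference happens locally, not in the cloud
        if clip_worthy(score):
            storage.append(frame)    # kept locally; nothing is uploaded
    return storage
```

The point is the architecture, not the code: the decision about what is worth keeping is made at the edge, and the cloud is never involved.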

So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI first just yet, but they certainly have made good progress on AI-leveraged services.

The mental model I’m using for how Apple and Google compare is this:


The Apple model

Apple’s ecosystem focuses on integration: hardware (phones, laptops) and software (OS X, iOS) are both highly integrated, and services are built on top. This allows for consistent service delivery, for pushing the limits of hardware and software alike, and—most importantly for Apple’s bottom line—for selling hardware that’s differentiated by software and services: nobody else is allowed to make an iPhone.

Google started at the opposite side, with software (web search, then Android). Today, Google looks something like this:


The Google model

On top of its software base (search/discovery, plus Android), there’s now also more tightly integrated hardware. Note that Android is still the biggest smartphone platform, as well as the basis for lots of connected products, so Google’s own hardware isn’t the only game in town; how this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limit through its own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.

What we’ve seen so far is the very tip of the iceberg: as Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting. Google not only develops the concrete services we see in action; it also uses AI to build its new models, and opens up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself, and hence gather more steam the more it’s used.

This offers tremendous opportunities and challenges. So while it’s exciting to see this unfold, we need to get our policies ready for futures with AI.

Please note that this is cross-posted from Medium. Disclosure: I’ve worked with Google a few times in the recent past.

Some thoughts on Google I/O and AI futures

Google’s developer conference Google I/O has been taking place these last couple of days, and oh boy, have there been some gems in CEO Sundar Pichai’s announcements.

Just to get this out right at the top: Many analysts’ reactions to the announcements were a little meh: too incremental, not enough consumer-facing product news, they seemed to find. I was surprised to read and hear that. For me, this year’s I/O announcements were huge. I haven’t been as excited about the future of Google in a long, long time. As far as I can tell, Google’s future looks a lot brighter today than it did yesterday.

Let’s dig into why.

Just as a quick overview and refresher, here are some of the key announcements (links to write-ups included where available).

Let’s start with some announcements of a more general nature around market penetration and areas of focus:

  • There are now 2 billion active Android devices
  • Google Assistant comes to iOS (Wired)
  • Google has some new VR and AR products and collaborations in the making, both through their Daydream platform and stand-alone headsets

Impressive, but not super exciting; let’s move on to where the meat is: artificial intelligence (AI), and more specifically machine learning (ML). A year ago, Google announced it would become an AI-first company. And it’s certainly making good on that promise:

  • Google Lens super-charges your phone camera with machine learning to recognize what you’re pointing the camera at and give you context and contextual actions (Wired)
  • Google turns Google Photos up to 11 through machine learning (via Lens), including not just facial recognition but also smart sharing.
  • Copy & paste gets much smarter through machine learning
  • Google Home can differentiate several users (through machine learning?)
  • Google Assistant’s SDK allows other companies and developers to include Assistant in their products (and not just in English, either)
  • Cloud TPU is the new hardware that Google launches for machine learning (Wired)
  • Google uses neural nets to design better neural nets

Here’s a 10min summary video from The Verge.

This is incredible. Every aspect of Google, both backend and frontend, is impacted by machine learning. Including their design of neural networks, which are improved by neural networks!

So what we see there are some incremental (if, in my book, significant) updates in consumer-facing products. This is mostly feature level:

  • Better copy & paste
  • Better facial recognition in photos (the error rate of their computer vision algorithms “is now better than the human error rate”, says ZDNet)
  • Smarter photo sharing (“share all photos of our daughter with my partner automatically”)
  • Live translation and contextual actions based on photos (like pointing the camera at a wifi router to read the login credentials and log you into the router automatically).
  • Google Home can tell different users apart.

As features, these are nice-to-haves, not must-haves. However, they’re powered by AI. That changes everything. This is large-scale deployment of machine learning in consumer products. And not just consumer products.

Google’s AI-powered offerings also power other businesses now:

  • Assistant can be included in third-party products, much as Amazon allows with Alexa. This increases reach, and also the datasets available to further train the machine learning algorithms.
  • The new Cloud TPU chips, combined with Google’s cloud-based machine learning framework around TensorFlow, mean that Google is now in the business of providing machine learning infrastructure: AI-as-a-Service (AIaaS).

It’s this last point that I find extremely exciting. Google just won the next 10 years.

The market for AI infrastructure—for AI-as-a-Service—is going to be mostly Google & Amazon (who already has a tremendous machine learning offering). The other players in that field (IBM, and maybe Microsoft at some point?) aren’t even in the same ballpark. Potentially there will be some newcomers; it doesn’t look like any of the other big tech companies will be huge players in that field.

As of today, Google sells AI infrastructure. This is a mode that we know from Amazon (where it has been working brilliantly), but that so far we hadn’t really seen from Google.

There haven’t been many groundbreaking consumer-facing announcements at I/O. However, the future has never looked brighter for Google. Machine learning just became a lot more real and concrete. This is going to be exciting to watch.

At the same time, now’s the best time to think about the societal implications, risks, and opportunities inherent in machine learning at scale: We’re on it. In my own work, as well as in our community over at ThingsCon, we’ve been tracking and discussing these issues in the context of the Internet of Things for a long time. I see AI and machine learning as a logical continuation of that same debate. So in all my roles I’ll continue to advocate for a responsible, ethical, human-centric approach to emerging technologies.

Full disclosure: I’ve worked many times with different parts of Google, most recently with the Mountain View policy team. I do not, at the time of this writing, have a business relationship with Google. (I am, however, a heavy user of Google products.) Nothing I write here is based on any kind of information that isn’t publicly available.

Google/Skybox could offer a searchable DIFF of the world

Since Google recently announced it would buy satellite company Skybox, there’s been quite a bit of speculation about the reasons for and potential implications of the acquisition. Some wondered about the emerging picture of a Google that owns military robots and drones and has access to information about both the outside and inside of our homes; others looked at how regularly updated satellite images could improve maps, or how a real-time map of, say, available parking spots might be possible with this technology; others made predictions about market and economic developments. Wired speculates about Really Big Data and geopolitical forecasting.

Writes The Atlantic:

Right now, the raw imagery created by satellite cameras can be hard to decode and process for non-experts. Therefore, many companies like Skybox hope to sell “information, not imagery.” Instead of pixels, they’ll give customers algorithmically-harvested assessments of what’s in the pixels. For example, using regular satellite-collected data, an algorithm could theoretically look for leaks in an Arctic pipeline and alert the pipeline’s owners when one appeared.

This at least is one of the visions Skybox promotes in their videos:

It’s hard to tell how much of this is possible yet; I’d assume it’s nowhere near as complete now as it might seem. But it is a near-real-time video feed of a large part of the surface of the world that – at some point – could be analyzed and converted into actionable data.

A searchable DIFF

And that’s where it gets really interesting: With this kind of technology, once it’s ready for prime time, Google could offer a complete over-time picture, a searchable visual and data representation – a DIFF of the world.

Imagine a cargo container, sitting in a dock, loaded onto a ship, the ship moving (and recognized by the image processing algorithms as such), the container being unloaded and put on a train, then a truck, then opened up and emptied. At any given time, you can trace (and trace back, if in hindsight it becomes interesting) how the tracked object has moved over time.
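In data terms, such a DIFF could be as simple as a time-indexed log of detections per tracked object, queryable both forward and backward in time. Here’s a minimal sketch of that idea; the class, its methods, and the container example are all made up for illustration and have nothing to do with Skybox’s actual systems.

```python
from collections import defaultdict

class WorldDiff:
    """Toy time-indexed store of detections: object_id -> [(timestamp, location)]."""

    def __init__(self):
        self.tracks = defaultdict(list)

    def observe(self, object_id, timestamp, location):
        # Assumes observations arrive in timestamp order per object.
        self.tracks[object_id].append((timestamp, location))

    def where_was(self, object_id, timestamp):
        """Trace back: the last known location at or before `timestamp`."""
        last = None
        for t, location in self.tracks[object_id]:
            if t > timestamp:
                break
            last = location
        return last

    def history(self, object_id):
        """The full movement trace of one object over time."""
        return list(self.tracks[object_id])
```

With something like this, the container scenario becomes a handful of `observe` calls (dock, ship, train) plus a `where_was` lookup for any point in between.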

Live analysis combining a variety of data sources

Fast forward a few years and into version two of the toolkit (maybe) being built here. Then we’re looking at a much bigger picture. Assume a lot more processing power is now available to process, analyze, categorize and store the data extracted from the satellite images, perhaps enriched by other data sources, too. Now you can offer ad-hoc queries across the real world as a service, similar to the way Wolfram Alpha lets you perform calculations by pulling together data from various sources – weather and traffic data; processed video feeds from drones; market and stock info; communications and network data, etc. – and combining them into one powerful analysis tool.
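One way to picture such a combining layer: each data source is just a table of records, and a query joins, filters, and aggregates across them. The sketch below is pure illustration; all sources, field names, and the query shape are invented.

```python
# Toy sketch of a cross-source query layer: each "source" is a list of records,
# and a query joins them on a shared key, filters, and aggregates. All sources
# and field names below are invented for illustration.

def query(sources, join_key, predicate, aggregate):
    """Join records from several sources on `join_key`, filter, and aggregate."""
    joined = {}
    for name, records in sources.items():
        for record in records:
            joined.setdefault(record[join_key], {})[name] = record
    # Keep only keys present in every source and passing the filter.
    matches = [row for row in joined.values()
               if len(row) == len(sources) and predicate(row)]
    return aggregate(matches)

# Example: count bus stops where it's raining and the queue is long.
sources = {
    "queues": [{"stop": "A", "line_length": 12}, {"stop": "B", "line_length": 3}],
    "weather": [{"stop": "A", "raining": True}, {"stop": "B", "raining": False}],
}
long_wet_stops = query(
    sources,
    join_key="stop",
    predicate=lambda row: row["weather"]["raining"] and row["queues"]["line_length"] > 10,
    aggregate=len,
)
```

Swap in different sources, predicates, and aggregates and you get the Wolfram-Alpha-style flexibility described above, only over real-world observations instead of curated datasets.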

I find it hard to come up with good examples for this off the top of my head; let’s try anyway. Say you want to know how many trucks versus cars pass over a certain bridge. Or where to find the highest density of SUVs globally. Or the ratio of swimming pools per capita in LA compared to New Delhi compared to London. Or correlate the length of lines at bus stops with the local weather. Or you want to know where your car ended up after it got stolen, and where the person who stole it went.

These examples are admittedly pretty weak. But suffice it to say that the range of applications – in commercial, military and security, and social contexts – is enormous – ludicrously enormous – for good and evil alike.

Dear Google, this isn’t good enough

Dear Google,

Trying to buy your devices internationally is an incredibly annoying experience.

With the Nexus line, you have demonstrated that you can build beautiful, useful devices and deliver a great user experience. Sadly, buying them outside the United States is the opposite.

About a year ago, when you launched the first Nexus tablets, I intended to buy a Nexus 7. First, it took weeks — if not months — before you launched the devices in Europe. Still, I signed up for launch notifications on the Google Nexus site. I never received a notification about the impending launch.

On the launch day, the shop was sold out within mere minutes. I understand that selling out quickly is desirable from a press point of view. As a customer willing to pay for goods, it’s merely annoying.

A couple of months ago, you launched the new edition of the Nexus tablet line in the US, then some other countries. Since my old Nexus had been stolen in the meantime, I again intended to buy one. There was no hint as to when it would be available in Germany or many other countries. The German Nexus site still showed the old Nexus and made no mention of the new edition, which seems slightly misleading. Then, a few weeks later, the new Nexus was announced on the German site, too, but again without a launch date. Instead there was, once more, a form to sign up to be notified about the launch. Again, no notification was sent.

I asked both friends working at Google and the Google Nexus support on Twitter (@googlenexus) for information on expected availability. My friends didn’t know anything; the customer support never replied on Twitter.

When I checked the site for updates, I noticed that once more the sales had started without notifications and — again — you hadn’t provided enough devices and sold out in no time. Once more, the device isn’t available, pre-orders aren’t possible, and there’s no statement of an intended date of availability.

Yes, despite my frustration, I will buy a new Nexus when it becomes available. But I’m far from being a satisfied customer. As someone who’s always supported the Android ecosystem, I find this very frustrating.

You need to get better at this so you don’t make everyone outside the US feel like second- or third-class customers. You’re not a scrawny little startup with limited means to invest in an international roll-out. You’re Google. Start behaving like it.

This isn’t good enough.

Yours,

Peter

Recent reading (7 links for March 21)

Irregularly, I post noteworthy articles I recently read. Enjoy!

 

Nike, TechStars Unveil Startup Accelerator Winners
A quick overview of the first round of winners of the TechStars accelerator program focused on Nike+. – by Austin Carr (link)

 

Why Wait For Google Fiber? UK Farmers Want Faster Internet, Build Their Own
A group of farmers self-organizing a 1Gbps fiber connection? Hell yeah. – by David J. Hill (link)

 

Why I left Google
The quiet, calm tone of voice of this “why I left Google” story makes it almost touching. The core point, though, is an important one: “The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus.” – by James Whittaker (link)  

 

Everything you need to know about Moleskine ahead of its IPO
Moleskine has a profit margin of some 41%. That’s somewhere in the region of what Apple makes on hardware. Insane. – by Zachary M. Seward (link)  

 

Things publishers can’t do (yet)
Charlie Stross shares his – well-informed – thoughts on how ebooks could and should be treated very differently from hardcover and softcover paper editions. – by Charlie Stross (link)

 

3 Ways To Make Wearable Tech Actually Wearable
Some solid ground rules on how to make better wearable tech. – by Jennifer Darmour (link)  

 

The Future of Product Design is Localized and Democratized
Hardware innovation & startups as the byproduct of an itch that needs scratching in your immediate area/peer group. – by Theodore Ullrich (link)  

Recent reading (5 links for March 4)

Irregularly, I post noteworthy articles I recently read. Enjoy!

 

21 Thoughts on Lab Development
The good folks over at Undercurrent shared some insights on how to build in-house labs. – by Clay Parker Jones (link)  

 

Kill the Password: Why a String of Characters Can’t Protect Us Anymore
“You have a secret that can ruin your life.” WIRED makes the case for abandoning the password altogether. – by Mat Honan (link)  

 

The Google Glass feature no one is talking about
It’s not about the experience of wearing Google Glass, but of other people wearing Google Glass. Good points all ’round. – by Mark Hurst (link)  

 

It’s the Sugar, Folks
Slightly off-topic for this blog, but boy, does sugar sound bad. – by Mark Bittman (link)  

 

Putting people together to create new products
Great advice on how to build a good team to build a product. – by Jonathan Korman (link)