
Google’s new push toward AI-powered services


At their Pixel 2 event at the beginning of the month, Google released a whole slew of new products. Besides new phones, there were updated versions of their smart home hub, Google Home, and some entirely new types of product.

I don’t usually write about product launches, but this event has me excited about new tech for the first time in a long time. Why? Because some aspects stood out as signals of a larger shift in the industry: the new role of artificial intelligence (AI) as it seeps into consumer goods.

Google has been reframing itself from a mobile-first to an AI-first company for the last year or so. (For full transparency I should add that I’ve worked with Google occasionally in the recent past, but everything discussed here is, of course, publicly available.)

We now see this shift in focus manifest in products.

Here’s Google CEO Sundar Pichai at the opening of Google’s Pixel 2 event:

We’re excited by the shift from a mobile-first to an AI-first world. It is not just about applying machine learning in our products, but it’s radically re-thinking how computing should work. (…) We’re really excited by this shift, and that’s why we’re here today. We’ve been working on software and hardware together because that’s the best way to drive the shifts in computing forward. But we think we’re in the unique moment in time where we can bring the unique combination of AI, and software, and hardware to bring the different perspective to solving problems for users. We’re very confident about our approach here because we’re at the forefront of driving the shifts with AI.
First things first: I fully agree – there’s currently no other company as well positioned to drive the development of AI, or to benefit from it. In fact, back in May 2017 I wrote that “Google just won the next 10 years.” At that point Google had merely hinted at its capabilities in terms of new features, but it had also announced it was building AI infrastructure for third parties to use. AI as a platform: Google has it.

Before diving into some structural thoughts, let’s look at two specific products they launched:

  1. Google Clips is a camera you can clip somewhere, and it’ll automatically take photos when certain conditions are met: a particular person’s face is in the picture, or they are smiling. It’s an odd product for sure, but here’s the thing: it’s fully machine-learning-powered facial recognition, and the computing happens on the device. This is remarkable both as a technical achievement and for its approach. Google has become a company of high centralization (the bane of cloud computing, I’d lament), but Google Clips works at the edge, decentralized. This is powerful, and I hope it inspires a new generation of IoT products that embrace decentralization.
  2. Google’s new in-ear headphones offer live translation. That’s right: these headphones should allow for multi-language human-to-human live conversations. (This happens in the cloud, not locally.) How well this works in practice remains to be seen, and surely you wouldn’t want to run a work meeting through them. But even if it eases travel-related helplessness just a bit, it’d be a big deal.

So as we see these new products roll out, the actual potential becomes much more graspable. There’s a shape emerging from the fog: Google may not really be AI-first just yet, but it has certainly made good progress on AI-leveraged services.

The mental model I’m using for how Apple and Google compare is this:


The Apple model

Apple’s ecosystem focuses on integration: hardware (phones, laptops) and software (OSX, iOS) are highly integrated, and services are built on top. This allows for consistent service delivery, for pushing the limits of hardware and software alike, and, most importantly for Apple’s bottom line, for selling hardware that’s differentiated by software and services: nobody else is allowed to make an iPhone.

Google started from the opposite side, with software (web search, then Android). Today, Google looks something like this:


The Google model

Based on software (search/discovery, plus Android), there is now also hardware that’s more deeply integrated. Note that Android is still the biggest smartphone platform as well as the basis for lots of connected products, so Google’s own hardware isn’t the only game in town; how this works out with partners over time remains to be seen. That said, this new structure means Google can push its software capabilities to the limits through its own hardware (phones, smart home hubs, headphones, etc.) and then aim for the stars with AI-leveraged services in a way I don’t think we’ll see from competitors anytime soon.

What we’ve seen so far is the very tip of the iceberg: as Google keeps investing in AI and exploring the applications enabled by machine learning, this top layer should become exponentially more interesting. Google not only develops the concrete services we see in action; it also uses AI to build its new models, and it opens up AI as a service for other organizations. It’s a triple AI ecosystem play that should reinforce itself, and hence gather more steam the more it’s used.

This offers tremendous opportunities and challenges. So while it’s exciting to see this unfold, we need to get our policies ready for futures with AI.

Please note that this is cross-posted from Medium. Disclosure: I’ve worked with Google a few times in the recent past.

Google/Skybox could offer a searchable DIFF of the world


Since Google recently announced it would buy satellite company Skybox, there’s been quite a bit of speculation about the reasons for and potential implications of the acquisition. Some wondered about the emerging picture of a Google that owns military robots and drones and has access to information about both the outside and the inside of our homes; others looked at how regularly updated satellite images could improve maps, or how a real-time map of, say, available parking spots might become possible with this technology; others again made predictions about market and economic developments. Wired speculates about Really Big Data and geopolitical forecasting.

Writes The Atlantic:

Right now, the raw imagery created by satellite cameras can be hard to decode and process for non-experts. Therefore, many companies like Skybox hope to sell “information, not imagery.” Instead of pixels, they’ll give customers algorithmically-harvested assessments of what’s in the pixels. For example, using regular satellite-collected data, an algorithm could theoretically look for leaks in an Arctic pipeline and alert the pipeline’s owners when one appeared.

This, at least, is one of the visions Skybox promotes in its promotional videos.

It’s hard to tell how much of this is possible yet; I’d assume it’s nowhere near as complete now as it might seem. But it is a near-real-time video feed of a large part of the world’s surface that – at some point – could be analyzed and converted into actionable data.

A searchable DIFF

And that’s where it gets really interesting: with this kind of technology, once it’s ready for prime time, Google could offer a complete over-time picture, a searchable visual and data representation of change – a DIFF of the world.

Imagine a cargo container, sitting in a dock, loaded onto a ship, the ship moving (and recognized by the image processing algorithms as such), the container being unloaded and put on a train, then a truck, then opened up and emptied. At any given time, you can trace (and trace back, if in hindsight it becomes interesting) how the tracked object has moved over time.
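
To make that concrete, here is a minimal sketch of what querying such a DIFF could look like. Everything in it is hypothetical: the Detection record, the stable object IDs, and the trace_object query are stand-ins for a recognition pipeline that doesn’t (publicly) exist.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record emitted by an image-recognition pipeline: one sighting
# of a tracked object in a satellite frame. This is not any real API.
@dataclass
class Detection:
    object_id: str   # stable ID the pipeline assigns to a recognized object
    timestamp: datetime
    lat: float
    lon: float
    label: str       # what the object was classified as in this frame

def trace_object(detections: list[Detection], object_id: str) -> list[Detection]:
    """Return one object's sightings in chronological order: its diff over time."""
    return sorted(
        (d for d in detections if d.object_id == object_id),
        key=lambda d: d.timestamp,
    )

# Toy data standing in for months of processed imagery.
detections = [
    Detection("c-4711", datetime(2014, 6, 1, 8), 53.54, 9.99, "container in dock"),
    Detection("c-4711", datetime(2014, 6, 2, 14), 53.55, 9.97, "container on ship"),
    Detection("c-4711", datetime(2014, 6, 9, 11), 51.95, 4.14, "container on train"),
]

# Trace the container's journey after the fact:
for sighting in trace_object(detections, "c-4711"):
    print(sighting.timestamp, sighting.label, (sighting.lat, sighting.lon))
```

The point of the sketch is the query shape: once every frame is reduced to time-stamped, identity-stable detections, “trace this object back through time” becomes an ordinary lookup.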

Live analysis combining a variety of data sources

Fast forward a few years and into version two of the toolkit (maybe) being built here. Then we’re looking at a much bigger picture. Assume a lot more processing power is now available to process, analyze, categorize and save the data available from the satellite images. Maybe enriched by other data sources, too. Now you can offer to pull together unforeseen searches on the real world as a service, similar to the way Wolfram Alpha lets you perform calculations by pulling together data from various sources – weather and traffic data; processed video feeds from drones; market and stock info; communications and network data, etc. – and combining them into one powerful analysis tool.

I find it hard to come up with good examples for this off the top of my head; let’s try anyway. Say you want to know how many trucks vs. cars pass over a certain bridge. Or where to find the highest density of SUVs globally. Or the ratio of swimming pools per capita in LA compared to New Delhi compared to London. Or you want to correlate the length of lines at bus stops with the local weather. Or you want to know where your car ended up after it got stolen, and where the person who stole it went.
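
As a sketch of how one of these queries might look as a service call, here is the bus-stop example in code. Both data sources are hypothetical stubs: queue_length_from_imagery stands in for the satellite/image-analysis feed, weather_on for a weather archive; neither corresponds to any real API.

```python
from datetime import date, timedelta

# Hypothetical stubs for the two data sources being combined. In a real
# service these would be backed by image analysis and a weather archive;
# here they return dummy values so the sketch runs.
def queue_length_from_imagery(stop_id: str, day: date) -> int:
    """Stand-in: how many people did image analysis count at this stop that day?"""
    return 10 + day.toordinal() % 7

def weather_on(city: str, day: date) -> str:
    """Stand-in: historical weather lookup, e.g. 'rain' or 'sun'."""
    return "rain" if day.toordinal() % 3 == 0 else "sun"

def queue_vs_weather(stop_id: str, city: str, days: int) -> dict[str, float]:
    """Average queue length per weather condition over the last <days> days."""
    samples: dict[str, list[int]] = {}
    today = date.today()
    for offset in range(days):
        day = today - timedelta(days=offset)
        samples.setdefault(weather_on(city, day), []).append(
            queue_length_from_imagery(stop_id, day)
        )
    return {weather: sum(v) / len(v) for weather, v in samples.items()}

print(queue_vs_weather("stop-42", "Berlin", days=30))
```

The combination itself is trivial; the hard (and valuable) part is what the two stub functions hide, which is exactly what the satellite data would feed.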

These examples are pretty weak, admittedly. But suffice it to say that the range of applications – in commercial, military and security, and social contexts – is enormous, ludicrously enormous, for good and evil alike.