Federated Learning to Make Google Less of a Creeper?

As I wrote last week, AI (or what currently passes for it) is the latest innovation for smartphones, with Apple, Google, and even Samsung getting in on the action. From the perspective of the other two, Google has the enviable problem of already knowing so much about the people who use its services, which raises the question: how can it possibly maintain some semblance of privacy for users while collecting ever more data from them?

The answer is what Mountain View is calling “federated learning”. Google published a research paper and blog post on the subject, which I found through VICE Motherboard. Here’s how the latter explains the concept:

Normally, AI training has to be done with all of the data sitting on the one server. But with federated learning, the data is spread across millions of phones with a tiny AI sitting on all of them, learning the user’s patterns of use. Instead of the raw data being sent to a Google training server, the phone AI transmits an encrypted “update” that only describes what it’s learned, to Google’s main AI where it’s “immediately” aggregated with the updates from every other phone.

The researchers maintain that the update isn’t stored anywhere on its own, and thus cannot be linked with the individual user who provided it. Read more about federated learning at the links directly below.
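To make the mechanics a bit more concrete, here’s a minimal sketch of the general idea in Python. It simulates a few “phones” that each take one training step on their own private data for a toy linear model, and a server that combines only the resulting updates. The function names, the toy model, and the simple weighted averaging are my own illustration, not Google’s actual system, and the encryption and “immediate” aggregation step the researchers describe is omitted entirely.

```python
import numpy as np

def local_update(global_weights, local_data, local_labels, lr=0.5):
    """One gradient step on a phone's own data (toy linear model).

    Only the difference from the global model leaves the device;
    the raw data stays on the phone.
    """
    preds = local_data @ global_weights
    grad = local_data.T @ (preds - local_labels) / len(local_labels)
    new_weights = global_weights - lr * grad
    return new_weights - global_weights, len(local_labels)

def server_aggregate(global_weights, updates):
    """Combine per-device updates, weighted by how much data each device saw."""
    total = sum(n for _, n in updates)
    averaged = sum(n * delta for delta, n in updates) / total
    return global_weights + averaged

# Toy simulation: three "phones", each with its own private data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])       # the pattern the devices collectively learn
global_w = np.zeros(2)

for round_num in range(20):
    updates = []
    for _ in range(3):
        X = rng.normal(size=(32, 2))                          # never leaves the phone
        y = X @ true_w + rng.normal(scale=0.1, size=32)
        updates.append(local_update(global_w, X, y))
    global_w = server_aggregate(global_w, updates)

print("learned weights:", global_w)  # close to [2, -1] without pooling raw data
```

Even in this stripped-down form, the key property survives: the server only ever sees aggregated model updates, never the examples each device trained on.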

Source: Google Research Blog via VICE Motherboard
