I<tab-complete> welcome our new robot overlords.

Hoisted from a recent email exchange with my friend Gordon Shippey:

Re: Whassap?
Gordon:
Sounds like a plan.
(That was an actual GMail suggested response. Grumble-grumble AI takeover.)

Anthony:
I<tab-complete> welcome our new robot overlords.

I am constantly amazed by the new autocomplete. While, anecdotally, spell-checking autocorrect is getting worse and worse (I blame the nearly universal phenomenon of U-shaped development, where a system trying to learn new generalizations gets worse before it gets better), I have written near-complete emails to friends and colleagues with Gmail's suggested responses, and when writing texts to my wife, it knows our shorthand!

One way of doing this back in the day was Markov chain text models, where we learn predictions of which patterns are likely to follow each other; so if I write "love you too boo boo" to my wife enough times, the model can predict that "boo boo" will follow "love you too" and offer it as a completion. More modern systems use recurrent neural networks to learn richer sets of features, with stateful information carried down the chain, enabling them to capture subtler relationships and get better results, as described in the great article "The Unreasonable Effectiveness of Recurrent Neural Networks".
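The Markov chain idea is simple enough to sketch in a few lines of Python. This is just an illustrative toy, not any particular product's implementation: it counts which word follows each three-word context in a message history, then greedily extends a prompt with the most frequent continuation.

```python
from collections import Counter, defaultdict

def train_markov(messages, order=3):
    """Map each n-gram of words to a count of the words that follow it."""
    model = defaultdict(Counter)
    for msg in messages:
        words = msg.split()
        for i in range(len(words) - order):
            context = tuple(words[i:i + order])
            model[context][words[i + order]] += 1
    return model

def complete(model, prompt, order=3, max_words=5):
    """Greedily extend the prompt with the most frequent continuation."""
    words = prompt.split()
    for _ in range(max_words):
        context = tuple(words[-order:])
        if context not in model:
            break
        words.append(model[context].most_common(1)[0][0])
    return " ".join(words)

# Text the model has seen often, as in the example above.
history = ["love you too boo boo"] * 20 + ["love you lots"] * 3
model = train_markov(history)
print(complete(model, "love you too"))  # → "love you too boo boo"
```

A real completion system would smooth over unseen contexts and rank multiple candidates, but the core trick is just this lookup table of "what usually comes next."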

-the<tab-complete> Centaur

 
