Collapsing Context
The circumstances in which software is used are important
As someone who’s created a lot of internal tools, one of my long-held heuristics for asking “does this problem really require custom software?” is “can this just be a Google Form?” You’d be surprised how often asking a question like that in a discussion can help people pare a problem down to its essentials, and a couple of times the answer was yes, of course, what were we thinking?
Some years ago that solution became less feasible for a certain class of problems, because Google started requiring people to be signed in to a Google account in order to fill out a form. Reportedly to prevent abuse. You might be inclined to give them the benefit of the doubt on this, except they’ve changed the sign-in process in some subtle ways recently.
Anyway I tried to fill out a Google Form this past weekend.
Some years ago I had set up multi-factor authentication on my main Google account using time-based one-time passwords (TOTP), which Google, of course, presents as requiring their Authenticator app rather than one of many compatible ones, and figured that was good for a while.
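TOTP is worth pausing on, because it’s a small open standard (RFC 6238), not a Google product: any compliant app derives the same six-digit code from a shared secret and the clock. A minimal sketch using only Python’s standard library — the secret below is the published RFC 6238 test key, not anything real:

```python
# Minimal TOTP sketch (RFC 6238 / RFC 4226), standard library only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Return the time-based one-time password for the given moment."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = int(now // step)                      # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" (base32 below) at T=59
# yields 94287082; truncated to six digits that's "287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Nothing in that derivation involves any particular vendor’s app, which is exactly why presenting Authenticator as *the* way is misleading.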
Security on the internet is of course an ever-evolving process, with things like hardware security keys and passkeys bringing some much-needed improvement to the area. Meanwhile, many institutions are still sending one-time passwords over SMS, presumably because they can’t get their CEO to use something more secure. Some of these have been shifting authentication for non-mobile clients, such as their websites, to push notifications in their mobile app when they know you have it installed. On the whole, I believe this is a simpler experience for a lot of people.
But in-app prompts only really work as viable authentication mechanisms in certain contexts. I would consider it acceptable for my bank to validate a new sign-in through a prompt in their app. It’s better than the SMS codes they were sending before, and the bank’s app is something I have always had good security hygiene around. Do you know what app I don’t treat with a holds-the-keys-to-the-kingdom, give-me-access-to-email-and-everything-else level of security? YouTube.
And yet, this weekend when I went to sign in to my Google account to fill out a Google Form, I was presented with:

The horrendously bad copy alone would be worth skewering.
That’s funny, I was expecting to use my TOTP code to validate my sign-in. Not some app that I use for entertainment.
I’m really curious about the decision-making that went into deciding it’s OK to authenticate people via push notification to the YouTube app. I’ve explicitly set up TOTP on my account, and I can’t even remove it without adding something like a passkey in its place. I wonder if Google pulls this because they know I’m not using their Authenticator app.
Is this a branding decision? Some… user engagement hack? Did Google rebrand to Youtube while I wasn’t looking? I wanted to understand what was really going on here, so I went to see what other options I had in this brave new world of authentication they’ve created, but I didn’t get a chance.
The sign-in had already been approved, because I had attempted it during the very brief window when my kid gets to watch LEGO physics videos in the YouTube app on our Apple TV, and the app had displayed a prompt over the running video asking to approve my sign-in. My kid approved it to get back to their video.
I wonder if this ever came up in discussions about this feature? Who decided this was a good idea? Who implemented it, in good conscience?
The context in which I use YouTube is not the same as the context in which I use my bank’s app, and yet if I still trusted Google and hosted my email with them, those contexts would effectively collapse. Unbeknownst to me, my TV had become a vector for an account takeover attack because somebody else misunderstands how I use their software.
It’s been a long time since I felt like I could trust Google for anything important (I figure it’s only a matter of time before anything besides AdWords ends up in the Google Graveyard), but now I simply can’t trust them with anything important. Not when they’re willing to turn my TV into a skeleton key to my account for some branding exercise.
My observation of the design culture at Google is that it’s paralyzed both by data, exemplified by the infamous testing of 41 shades of blue, and by promotion-driven development, and this misuse of the YouTube app in the chain of security around my account smells of branding-driven, promotion-driven decision-making.
As I have said here before and will inevitably say again, design is how it works. All software is used in a context, where circumstances are everything, and when dealing with security those circumstances are very important. If you abdicate that decision-making to A/B testing, committees, or branding, you cannot be a responsible custodian of whatever your users are trusting you with. Do better. Before you go making changes to how something works, stop to consider how that something is already being used.
Google says they require people to log in to complete a Form in order to prevent abuse, but after this experience with signing in, how could I possibly believe that, when an easier answer is that they just want more data about me?