the symbols of AI

In college, I took a memorable “Symbols and Consciousness” class with Professor James Peacock, a fascinating and brilliant man. He’d stumble into class ready to have a true dialogue rather than sit back and lecture. Symbols inherently raise consciousness, and we talked about how they show up in society, from team colors to political affiliations. The result is that I’ve never looked at icons the same way, and in my day-to-day work I run into debates around icons regularly, sometimes in a big way (most recently with the Font Library feature) and sometimes in smaller UI moments.

All of this brings me to the pervasive use of some variation of a sparkle icon (✨ ❇️) for any injection of artificial intelligence.

Who decided this? Why did ✨ come to mind first? What is it trying to convey? I’d guess some level of magic, approachability, newness, etc., even as AI also carries untold powers. I find it telling that we dress it up in this icon without warnings around misinformation or misuse. Note, I say “we” because one of the screenshots above is from Jetpack’s AI writer! I work for Automattic and used to work on Jetpack.

These icons are intentional, and the decisions fascinate me, especially having just helped a 91-year-old try to get away from all of this ✨magic✨ thrust upon her. My girlfriend’s grandma is incredibly tech literate, so when she mentioned that her searches kept surfacing a steady stream of useless information she couldn’t seem to escape, I was curious to find out what was going on. While she does run into the occasional user error, like when her browser zoom settings got changed, she does a solid job of navigating tech.

I asked her to replicate the problem for me: she sat down at her Dell computer, opened the built-in search, typed “how to fix my tv”, used the option to see the full results outside the smaller modal in a browser, and, as she scrolled to see the answers, AI jumped in. Almost immediately, she was flooded by Copilot, an AI-powered chatbot, literally spewing information faster than anyone could read it and filling up her screen, making it hard to even scroll. While I don’t have a full video, here’s a sense of it. Imagine you’re on a normal search results page, scrolling to find what you want to read, and then this opens up without your consent:

I’m “cheating” a bit, but this is closer to the experience: you start scrolling down to see search results that are clearly visible, only to be overtaken by a bot rapidly sharing information:

The resulting information was more annoying than helpful, a byproduct of the chatbot trying to be proactive rather than trusting the person searching to find their own way. I also don’t think it’s good UX to show the information forming before your eyes; I’d rather see a cute loading animation before being presented with the results. Perhaps, to actually initiate the “chat” aspect, it could ask some follow-up questions to guide the response rather than making assumptions that result in unhelpful information.

I spent at least an hour trying to disable the AI integration entirely after the browser (Microsoft Edge) settings refused to honor my decision. It was absolutely exasperating, and everywhere I looked I kept seeing overly cheerful AI popping up to run me over with information. I’ll spare the details, but I managed to limit the integration across her entire setup and eventually decided to teach her how to get back to searching whenever she’s thrown into the chat, knowing Microsoft was inevitably going to push this wherever they could. Even by the end, some searches (not all) that I did would still open Copilot, and I couldn’t figure out the pattern that caused it. I explained that the more Microsoft gets folks to use it, the more it works to their benefit, gathering more data and training their LLMs.
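For anyone fighting the same battle, the levers that tend to stick live in Windows policy settings rather than the in-browser toggles. Here’s a minimal sketch of the kind of registry policies involved; I’m not claiming this is exactly what I did on her machine, and while these value names are documented by Microsoft, they have shifted between Windows and Edge builds, so treat this as a starting point rather than a guaranteed fix:

```python
# Sketch: registry policies commonly used to rein in Copilot/Bing
# integration on Windows. Verify the value names against the machine's
# Windows/Edge version first; Microsoft has moved them between builds.
# Run as the affected user; the Edge policy can also live under
# HKEY_LOCAL_MACHINE, which requires admin rights.
import winreg

POLICIES = [
    # Stop the Start menu / built-in search from pulling in web results.
    (r"Software\Policies\Microsoft\Windows\Explorer",
     "DisableSearchBoxSuggestions", 1),
    # Older Windows 10 toggle for the same idea.
    (r"Software\Microsoft\Windows\CurrentVersion\Search",
     "BingSearchEnabled", 0),
    # Hide the Edge sidebar that hosts Copilot.
    (r"Software\Policies\Microsoft\Edge",
     "HubsSidebarEnabled", 0),
    # Turn off the Windows 11 Copilot button and pane.
    (r"Software\Policies\Microsoft\Windows\WindowsCopilot",
     "TurnOffWindowsCopilot", 1),
]

for path, name, value in POLICIES:
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, path, 0,
                            winreg.KEY_WRITE) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"set HKCU\\{path}\\{name} = {value}")
```

In my experience, changes like these don’t take effect until you at least restart Edge and sign out and back in, and even then there’s no guarantee an update won’t quietly reintroduce things.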

This cheerful, sparkly, overly helpful chatbot experience with a 91-year-old woman reinforced both how important consent and opting in are to an excellent user experience and how off the decisions made around AI “symbols” feel. I can imagine a different experience for her where she’s actually excited about chatting with AI, peppering it with questions, and choosing to engage. Right now, without the choice or an understanding of what it’s doing, it feels incredibly intrusive and, frankly, annoying.

My thoughts are still brewing, but there are some important connection points: avoiding situations where you overly trust an AI that might be hallucinating, and avoiding abuse of the tech, especially in scenarios where you might first interact with a chatbot before being transferred to a human. The latter can expose real humans to a wild amount of harm, especially if the bot gets folks riled up (similar to phone triaging systems today, where you work through an automated system first to try to solve your problem).
