When I started this segment, my plan was to document the good tabs I closed this week, probably four to six of them. It turned out to be a busy week, though, and rather than try to parcel things out, I’ve got a bunch of stuff about AI, mostly critical.

Two recurring themes run through several of these pieces: the “black box” nature of LLM algorithms, and the ethics of using the technology. I consult on technology problems involving insight into how complex systems work, how to mistake-proof things, and how to approach technology interactions ethically, and for the first time in a while, I have availability. I’m particularly keen on algorithmic-visibility work.

Let’s talk about recognizing AI art

The biggest tell, imo, the single biggest giveaway, is intention. People make decisions when they create art. They make thousands of decisions, and last I checked, nobody’s making AI prompts that complicated.

Why AI Isn’t Going to Make Art

Generally, I will recommend anything written by Ted Chiang.

the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no. An artist—whether working digitally or with paint—implicitly makes far more decisions during the process of making a painting than would fit into a text prompt of a few hundred words.

AI and the American Smile

Why do you smile the way you do? A silly question, of course, since it’s only “natural” to smile the way you do, isn’t it? It’s common sense. How else would someone smile?

AI worse than humans in every way at summarising information, government trial finds

These reviewers overwhelmingly found that the human summaries beat out their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

Kids who use ChatGPT as a study assistant do worse on tests

ChatGPT also seems to produce overconfidence. In surveys that accompanied the experiment, students said they did not think that ChatGPT caused them to learn less even though they had. Students with the AI tutor thought they had done significantly better on the test even though they did not.

AI Checkers Forcing Kids To Write Like A Robot To Avoid Being Called A Robot

But, simply inserting a sketchy “AI checker” in the process seems likely to do more harm than good. Even if the teacher isn’t guaranteed to be using the tool, just the fact that it’s there creates a challenge for my kid who doesn’t want to risk it. And it’s teaching them to diminish their own writing skills in order to convince the AI-checker that the writing was done by a human.

Do large language models have a legal duty to tell the truth?

Just as lawmakers seem to think “encryption that only the good guys can break” is just a matter of telling the nerds to math harder, I think we’re going to see a similar call for LLMs to tell “the truth,” and this paper examines the legal aspects of that.

When paired with automation bias and technology bias, or the human tendency to attribute superior capabilities to technology, these trends point towards a new type of epistemic harm that emerges through the proliferation of trusted but epistemologically flawed machine-generated content in human discourse, beliefs, culture and knowledge.

The LLM honeymoon phase is about to end

The limitations of this practice are clear. The prompts, adversarial prompts, and counter-prompts all grow like kudzu until each query has a preamble to rival that of a peak-cocaine Stephen King novel.

Shady Firms Say They’re Already Manipulating Chatbots to Say Nice Things About Their Clients

You thought SEO was bad? Hang on tight.

Given what black box AI models already are, all of these questions remain precariously unclear. What’s certain, though, is that malleable chatbots are gaining a footprint in the way that the internet is sorted, navigated, and managed — and what that means for the business of the digital world at large is just beginning to unfold.

Does AI Benefit the World?

This is a long piece, heavy on the sort of ethics that business leaders lack.

I ask my colleague if the internet has improved the world. He replies “You and I are talking right now using it, so yes.” I point out “you and me. The top 2% of privilege. How many have the opportunity to interact like this?”

Is AI a Silver Bullet?

A key problem here is that productivity improvements typically address only the second step, turning a design into code, and ignore the first step, creating the model.

I have a lot more to say on the topics of Big-A vs little-a agile, and modeling things from perspectives outside web development, but those are for other times.

LLMs won’t save labor when you use them like this

But more than that, the LLM is not considering the performance implications of spread vs. push because the LLM cannot consider anything. It’s a loom that generates patterns of information, endlessly.

Why AI projects fail

No, we want a magic tool to make the problem disappear. Which is a significantly different thing than solving it.

An Age of Hyperabundance

A long read worth your leisurely attention.

The CEO regarded us with satisfaction for his chatbot’s work: that, through a series of escalating tactics, it had convinced a woman to end her dog’s life, though she hadn’t wanted to at all. “The point of this story is that the woman forgot she was talking to a bot,” he said. “The experience was so human.”

How I’m Trying to Use Generative AI as a Journalism Engineer — Ethically

This year I have been working out how I can use AI in a way that gets around some of these issues, with limited success. The credo I’ve settled on is “Run locally and verify.”

AI chatbots might be better at swaying conspiracy theorists than humans

And finally, perhaps a genuinely useful use case?

This chatbot approach proved effective for a wide range of conspiracy theories, from classics like the JFK assassination, Moon landing hoaxes, and the Illuminati, to more recent examples like the 2020 election fraud claims and COVID-19.