Anthropomorphizing the Algorithm: Animism in the Age of AI


In many ancient cultures, people believed that everything has a spirit. Every human, animal, and plant, and even every rock and bolt of lightning, was understood the way we understand ourselves. If we eat because we are hungry, an animal must feel something similar when it eats. We still have a hunch like this today (it’s called empathy), so we can’t judge our ancestors too harshly for thinking that lightning must feel something similar to anger when it hurts one of us.

Today we tend to apply this kind of thinking to computers and algorithms. After all, computer programming is the scary voodoo of our day, especially machine learning, which even its own wizards can’t claim to fully understand. In the absence of a good framework to guide our intuitions, we do what humans do and empathize, anthropomorphizing the algorithms. We say “the algorithm thinks X” or “the algorithm wants X,” as if computers were rocks that we taught to think with lightning. (Aren’t they? And is that mean to the rocks?)

This isn’t always problematic: finding ways to relate new ideas to concepts we already understand is often a critical part of human learning. Sometimes, though, our hunches about “AI” algorithms are wrong. People imagine that the software works in a way it doesn’t, and end up with disastrous results simply because they fell into the trap of anthropomorphizing something they don’t understand. Algorithms certainly don’t think the way we do, and they can’t examine their own way of thinking the way we can. Just because you can figure something out doesn’t mean the AI can.

For example, consider an eDiscovery use case where attorneys are trying to train a model (or a series of models) to find emails related to the allegations in a civil complaint. The lawyers look at various documents and label them as responsive or not. The algorithm is here to help! It can read the text and will make educated guesses about the labels. The algorithm needs to learn that the lawyers want to find all of the emails about a particular product but aren’t interested in emails about football or the opera.
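At a high level, the loop the lawyers are driving looks roughly like the sketch below: they supply labels, the software fits a text model, and the model scores the documents no one has reviewed yet. This is only a minimal sketch, with scikit-learn standing in for whatever model a commercial review platform actually uses; every email, label, and the product name (“Widget X”) is invented for illustration.

```python
# Minimal sketch of a responsiveness classifier, with scikit-learn standing
# in for whatever model a review platform actually uses. All documents and
# labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents the lawyers have already reviewed and labeled.
reviewed_docs = [
    "Widget X launch budget attached",            # responsive
    "Widget X spec review at the 10:00 meeting",  # responsive
    "Great match last night, what a goal",        # not responsive
    "Opera tickets for Friday?",                  # not responsive
    "Fantasy football draft is Sunday",           # not responsive
]
labels = [1, 1, 0, 0, 0]  # 1 = responsive, 0 = not responsive

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviewed_docs, labels)

# The model then scores the unreviewed documents so the review team can
# prioritize the ones that look responsive.
unreviewed = ["Widget X pricing question", "Do you want lunch at noon?"]
for doc, score in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    print(f"{score:.2f}  {doc}")
```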

Imagine an email Bob sent to Charlie that said, “Do you want lunch at noon?” Based on past experience, the algorithm predicts that this document has nothing to do with the case. Most of the emails Bob sent to Charlie had nothing to do with the product, no other lunch emails had anything to do with the product, and all of the other product-related meetings were at 10:00 a.m. It has seen dozens of other emails from Bob to Charlie with almost exactly the same text, all tagged as non-responsive.
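To make that blindness concrete, here is roughly all a typical text classifier gets to “see” in the pivotal email: a bag of tokens, plus the labels on past examples. The tokenizer below is a generic one chosen for illustration, not any particular product’s.

```python
# What a typical text classifier actually "sees" in the pivotal email:
# a bag of tokens, nothing more. (Generic tokenizer for illustration;
# real products extract features in their own ways.)
from collections import Counter
import re

email = "Do you want lunch at noon?"
tokens = Counter(re.findall(r"[a-z']+", email.lower()))
print(tokens)
# e.g. Counter({'do': 1, 'you': 1, 'want': 1, 'lunch': 1, 'at': 1, 'noon': 1})
# Every prior email with these tokens was tagged non-responsive, so a model
# trained on those labels has no basis for treating this one differently.
# The fact that Bob and Charlie invented the product at this lunch lives
# only in the lawyers' heads; it never shows up as a feature.
```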

Every lawyer reviewing the documents in this case knows that Bob and Charlie came up with the idea for the product at issue at that particular lunch, so this email is unmistakably responsive to opposing counsel’s document requests. Stupid algorithm! Duh, this document is obviously relevant. Everyone knows that; wasn’t it listening when Bob told us last week?

(But before you criticize the algorithm too much, note that a keyword search for the product name would never have caught that relevant document either.)

We don’t realize that (today’s) AI doesn’t have access to all of the data we do. The data we have access to is far more diverse. We have all of our life experience and every inference we can draw from it. We have access to the collective knowledge of our colleagues: a subtle facial expression can tell you whether a statement you just made was insightful, questionable, or taboo. AI may have learned from “big data,” but imagine if you could only learn from words, not from images, feelings, or experiences. It would be a bit harder to put things in context. And without context, it is easily confused by inconsistent data. (For the record: I’m right there with you, algorithm.) We also benefit from our ability to think about our own thinking. We can gather more evidence when things seem contradictory, evidence from sources (like talking to people) that the AI can’t access.

Another mistake would be to over-simplify for pedagogical reasons. Unlike children, AI doesn’t benefit from oversimplification; intentionally mislabeling your data to “simplify” the task is subtly, and possibly seriously, counterproductive.

Take the example of a well-meaning AI consultant who warns the reviewers to label documents based only on what appears within their four corners. She explains that the AI will be confused if the lawyers tell it a document is relevant for reasons it can’t see. Those lunch emails are a puzzle! They don’t contain the product name anywhere, so how could the AI understand that they’re relevant? A lawyer might conclude that he should teach the AI that these emails are non-responsive, even though the lawyers know they are relevant. Whoops: if most of the lunch emails were relevant, the AI would have figured out that “lunch” was an important word. Instead, we confused it by thinking about our own thinking.

Now imagine a slightly different scenario in which Bob and Charlie meet frequently for lunch to discuss the product. Most of the reviewers were paying attention when the case team explained that they are legally required to produce such documents. Even though a few attorneys marked some of these emails as non-responsive, most of the team got them right, and the algorithm keeps insisting that they are responsive. Good algorithm! It’s helping to correct human error.

(And who would have guessed that the search term “Bob AND Charlie AND lunch” would have yielded so many responsive documents?)
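A toy experiment makes the contrast between these labeling approaches visible. Here scikit-learn again stands in for whatever model a real platform uses, and all of the emails, labels, and the product name (“Widget X”) are invented; only the direction of the effect matters, not the exact numbers.

```python
# The same toy classifier trained three times on the same invented emails:
# once with honest labels, once with one reviewer slip, and once with the
# consultant's "four corners" labels that call every lunch email non-responsive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Widget X launch budget attached",
    "Lunch at noon to talk about the roadmap?",
    "Lunch tomorrow? Same idea as last time",
    "Quick lunch to go over the numbers?",
    "Great match last night, what a goal",
    "Opera tickets for Friday?",
]
label_sets = {
    "honest":       [1, 1, 1, 1, 0, 0],  # the lunch emails really are responsive
    "one slip":     [1, 1, 0, 1, 0, 0],  # one mislabel; the majority is still right
    "four corners": [1, 0, 0, 0, 0, 0],  # "no product name, so mark it non-responsive"
}

new_email = ["Do you want lunch at noon?"]
for name, labels in label_sets.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(docs, labels)
    p = model.predict_proba(new_email)[0, 1]
    print(f"{name:>12} labels -> P(responsive) = {p:.2f}")
# With honest labels (and even with one slip), "lunch" remains a responsive
# signal; the "four corners" labels largely erase it.
```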

But of course the algorithm is not a conscious being with goals, desires, or critical-thinking abilities. It was designed and built for a specific purpose, and many factors determine its limitations. If we are going to imagine AI algorithms as sentient creatures, try to keep the following in mind:

  • They can only learn from examples. Sometimes they need a lot of examples. If too many of the examples are inconsistent or contradictory, they become confused.
  • They can only pay attention to evidence that is actually in the examples they are given. If something isn’t in the dataset, or the developers don’t explicitly feed it in, the algorithm can’t use it.
  • They should be tested and evaluated every time you use them on new data. They won’t necessarily work well this time just because they worked well last time. And as always, pay attention to the confidence interval, not just the point estimate (a rough sketch of that kind of check follows this list).
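As a rough sketch of that last bullet, the snippet below estimates recall on a hand-checked validation sample and reports a Wilson score interval instead of a bare percentage. The sample size and hit count are made up; the interval formula is standard statistics, not a feature of any particular review tool.

```python
# Estimate recall from a validation sample and report an interval, not just
# a point value. Numbers are invented for illustration.
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Suppose reviewers hand-checked 200 documents known to be responsive and
# found that the model had flagged 170 of them.
low, high = wilson_interval(hits=170, n=200)
print(f"Estimated recall: {170/200:.0%} (95% CI roughly {low:.0%} to {high:.0%})")
# The point estimate is 85%, but the honest statement is "somewhere around
# 79% to 89%" -- and even that only describes this dataset, not the next one.
```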

Please don’t take any of this too literally. There are many learning algorithms and many different approaches to feature extraction. Every product works differently, and it’s important to understand what works and what doesn’t for the product you are using. Hopefully the developers will catch up and find ways to anticipate almost everything. Then humans will only have to worry about being consistent with each other and with themselves. (Why is that so hard for us? Dear economists, just how rational are we, really?)

