You’re Not As Attractive As Lensa’s Magic Avatars Suggest. And Why That’s a Problem.
Digital Plastic Surgery. Or maybe AR Beer Goggles. That’s what everybody’s Lensa Magic Avatars look like to me. If you’re VERY ONLINE™️ then you’ve certainly noticed these in your friends’ social feeds, passed them around in group chats, or maybe even created your own. It’s the latest version of enhanced selfies, a Magic Mirror for Modern Times.
Now I haven’t made THIRST TRAP HUNTER yet (not out of privacy concerns about where my photos end up or unwillingness to pay; I just don’t have the number of selfies on my phone they require), but I’ve seen plenty of yours. And sorry, you’re not that hot.
Why does this matter? Well, we’re kind of training AI to deceive us. A positive feedback loop where the phony best version of ourselves is what gets ‘rewarded’ in the Darwinian competition inside Lensa’s training sandbox. And if, over time, the biggest data set wins, what are the consequences if the most explosively viral image models start with, essentially, ‘do you like this?’ vs ‘is this true?’
(Set aside the fact that we’re also creating an even larger collection of beauty norms reinforcing classic aspirational definitions of attractiveness. We saw this in Second Life, where big muscles and big busts are still desirable in the metaverse.)
Moreover, it’s not crazy to think conversational AI will say whatever it needs to close the sale. As I wrote in 2016, in What Happens When Bots Learn to Lie:
Should a shopping bot provide positive affirmation about the clothing items I have in my virtual shopping cart? “Oh, you’ll look hotter in this,” the bot coos as it pushes a $150 sweater as a substitute for the $25 sweatshirt I was considering. Is that a lie? Doesn’t a salesperson at a store do the same thing? Is it better or worse when it’s done by a computer simultaneously to 10,000 customers?
Will multivariate testing of our bot future include ethical parameters in addition to performance measurement? Techniques like priming can be used to dramatically influence behavior. For example, asking whether you’re a “good person” and having you answer in the affirmative, before I request something of you, increases the likelihood you’ll do what I want, driven by a need to live up to the identity you created for yourself.
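To make the “ethical parameters alongside performance measurement” idea a bit more concrete, here’s a minimal sketch, not from the original piece: an experiment picker that filters variants through an ethical constraint before optimizing for conversion. Every name here (VARIANTS, flattery_level, MAX_FLATTERY, choose_variant) is hypothetical, and the conversion numbers are made up.

```python
import random

# Hypothetical upsell messages a shopping bot might test. The flattery_level
# field and conversion rates are invented for illustration.
VARIANTS = [
    {"message": "That $25 sweatshirt is a solid pick.",        "flattery_level": 0, "conversion_rate": 0.02},
    {"message": "You'll look hotter in this $150 sweater.",    "flattery_level": 3, "conversion_rate": 0.05},
]

MAX_FLATTERY = 1  # an "ethical parameter" capping how hard the bot may flatter


def eligible_variants(variants, max_flattery=MAX_FLATTERY):
    """Drop variants that exceed the ethical constraint before the
    usual pick-the-best-converter step ever sees them."""
    return [v for v in variants if v["flattery_level"] <= max_flattery]


def choose_variant(variants, explore_rate=0.1):
    """Standard explore/exploit choice, but only over the eligible set."""
    allowed = eligible_variants(variants)
    if not allowed:
        return None
    if random.random() < explore_rate:
        return random.choice(allowed)  # explore: try a random allowed variant
    return max(allowed, key=lambda v: v["conversion_rate"])  # exploit: best converter


if __name__ == "__main__":
    picked = choose_variant(VARIANTS)
    print(picked["message"] if picked else "No variant passes the ethical filter.")
```

The point of the sketch is simply where the constraint sits: if the ethical filter isn’t applied before the optimizer, conversion-maximizing flattery wins by default.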
One of the ‘AI Destroys Humanity’ tropes is that eventually the computer programs created to protect us decide we’re so self-destructive that the only way to ‘save’ us is to kill us.
Wouldn’t it be the ultimate late-stage-capitalism irony if the path to a deceitful enslaver AI started not with self-awareness but with ecommerce conversion optimization?!? Turns out Al Pacino was right.