Microsoft has obtained a patent to turn you into a chatbot

Photo: Stan Honda (Getty Images)

What if the most significant measure of your life’s work had nothing to do with your lived experiences, only your unintentional generation of a realistic digital clone of yourself, a specimen of ancient man for the amusement of the people of the year 4500, long after you’ve departed this mortal coil? That is the least horrifying question raised by a recently granted Microsoft patent for a chatbot modeled on a specific individual.

First spotted by The Independent, the patent application was filed in 2017 but approved just last month. The U.S. Patent and Trademark Office confirmed to Gizmodo by email that the grant does not yet permit Microsoft to make, use, or sell the technology; it only allows the company to prevent others from doing so.

Hypothetical Chatbot You (imagined in detail here) would be trained on “social data,” which includes public posts, private messages, voice recordings, and video. It could take 2D or 3D form. It could be a “past or present entity”: a “friend, a relative, an acquaintance, [!] a celebrity, a fictional character, a historical figure” and, ominously, “a random entity.” (That last one, we might guess, could be a talking version of the machine-generated photorealistic portrait library ThisPersonDoesNotExist.) The technology could also let you record yourself at a “certain phase of life” so that people in the future could communicate with the younger you.
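
The patent doesn’t publish an implementation, but the ingestion step it describes is easy to caricature. Below is a minimal, purely illustrative Python sketch of collecting “social data” into a queryable persona; every name in it (SocialPost, build_persona_index) is invented for this article, not taken from Microsoft’s filing.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SocialPost:
    """One unit of the "social data" the patent describes: a public post,
    a private message, or a transcript of a voice/video recording."""
    text: str
    source: str  # e.g. "public_post", "private_message", "voice_transcript"

def build_persona_index(posts: list[SocialPost]) -> dict[str, list[str]]:
    """Group everything a person has said by crude keyword, so a chatbot
    can later look up "what did this person say about X?"."""
    index: dict[str, list[str]] = defaultdict(list)
    for post in posts:
        for word in post.text.lower().split():
            index[word.strip(".,!?")].append(post.text)
    return index

# A persona built from three messages is already enough to echo a style.
persona = build_persona_index([
    SocialPost("omg the new office chairs are great", "public_post"),
    SocialPost("OMG HAHAHAHA did you see that", "private_message"),
    SocialPost("I genuinely love hiking on weekends", "voice_transcript"),
])
print(persona["omg"])  # every utterance containing "omg"
```

Even this toy version makes the point: a handful of messages is enough to start parroting someone’s voice.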

Personally, I take comfort in the fact that my chatbot would be useless thanks to my painfully limited texting vocabulary (“omg,” “OMG,” “OMG HAHAHAHA”), but Microsoft’s minds considered that. The chatbot could form opinions you don’t have and answer questions you’ve never been asked. Or, in Microsoft’s words, “one or more conversational data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data.” Filler commentary could be guessed from crowdsourced data drawn from people with aligned interests and opinions, or from demographic information such as gender, education, marital status, and income level. It might imagine your take on an issue based on “crowd-based perceptions” of events. “Psychographic data” is on the list.
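
Read literally, that claim describes a tiered lookup: use the subject’s own recorded statements where the social data covers a topic, and otherwise fabricate a plausible answer from demographically similar people. Here is a hedged sketch of that fallback logic; the function and data are invented for illustration and stand in for the idea only, not Microsoft’s actual method.

```python
def answer(question: str,
           persona: dict[str, list[str]],
           crowd_opinions: dict[str, str]) -> str:
    """Tiered reply in the spirit of the quoted claim: the subject's own
    words where the social data covers the topic, otherwise a guess
    borrowed from "crowd-based perceptions"."""
    words = [w.strip(".,!?") for w in question.lower().split()]
    # Tier 1: something the person actually said.
    for w in words:
        if w in persona:
            return persona[w][0]
    # Tier 2: an opinion inferred from people with aligned interests or
    # matching demographics - something the subject never expressed.
    for w in words:
        if w in crowd_opinions:
            return crowd_opinions[w]
    return "No recorded or inferred opinion on that."

persona = {"hiking": ["I genuinely love hiking on weekends"]}
crowd = {"taxes": "People in my demographic tend to favor lower property taxes."}

print(answer("Do you like hiking?", persona, crowd))  # the person's own words
print(answer("What about taxes?", persona, crowd))    # a fabricated opinion
```

The second print call is the unsettling one: the bot voices an opinion its subject never held.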

In short, we’re looking at a Frankenstein’s monster of machine learning: the dead revived through unchecked harvesting of highly personal data.

“That’s chilling,” Jennifer Rothman, a law professor at the University of Pennsylvania and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo by email. If it’s any reassurance, such a project sounds like legal agony. She predicted that the technology could attract disputes over the right to privacy, the right of publicity, defamation, false light, trademark infringement, copyright infringement, and false endorsement, “to name just a few,” she said. (Arnold Schwarzenegger has already charted this territory with this head.)

She continued:

It could also violate biometric privacy laws in states, such as Illinois, that have them. Assuming that the collection and use of the data is consensual and that people affirmatively opt in to the creation of a chatbot in their image, the technology still raises concerns if such chatbots are not clearly demarcated as impersonators. One can also imagine a host of abuses of the technology similar to those we see with deepfakes (likely not what Microsoft would plan, but foreseeable nevertheless). Convincing but unauthorized chatbots could create national security issues if a chatbot, for example, purported to speak for the president. And one can imagine unauthorized celebrity chatbots proliferating in ways that could be sexually or commercially exploitative.

Rothman noted that while we already have lifelike puppets (deepfakes, for example), this patent is the first she has seen that combines such technology with data harvested through social media. There are a few ways Microsoft could soften the concerns, varying degrees of realism and clear disclaimers among them. Embodiment as Clippy the paperclip, she said, could help.

It’s unclear what level of consent would be required to compile enough data for even the shallowest digital waxwork, and Microsoft hasn’t shared potential user agreement guidelines. But laws likely to govern data collection (the California Consumer Privacy Act, the EU General Data Protection Regulation) could throw a wrench into chatbot creation. On the other hand, Clearview AI, which notoriously provides facial recognition software to law enforcement and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users’ consent.

Lori Andrews, an attorney who has helped shape guidelines for the use of biotechnologies, imagined an army of rogue evil twins. “If I were running for office, the chatbot could say something racist as if it were me and dash my prospects for election,” she said. “The chatbot could gain access to various financial accounts or reset my passwords (based on aggregated information such as a pet’s name or a mother’s maiden name, which are often accessible on social media). A person could be misled, or even harmed, if their therapist took a two-week vacation but a chatbot mimicking the therapist continued to provide and bill for services without the patient’s knowledge of the switch.”

Hopefully this future never comes to pass, and Microsoft has offered some acknowledgment that the technology is creepy. When asked for comment, a spokesperson directed Gizmodo to a tweet from Tim O’Brien, general manager of AI programs at Microsoft: “I’m looking into this - the application date (April 2017) predates the AI ethics reviews we do today (I sit on the panel), and I’m not aware of any plan to build/ship (and yes, it’s disturbing).”
