DiResta initially thought Ramsey's message might be a phishing attempt - trying to trick her into revealing sensitive information. She grew even more suspicious when she received an identical LinkedIn message - including the same emojis - from someone else claiming to be a RingCentral employee and whose profile photo also looked computer-generated.

"The positioning of the features in the face is something where if you've seen these enough times, you just become familiar with it," DiResta said.

"In the course of my work, I look at a lot of these things, mostly in the context of political influence operations," DiResta said. "But all of a sudden, here was a fake person in my inbox reaching out to me."

To confirm whether Ramsey was indeed a "fake person," NPR dug into the background described on her LinkedIn profile. RingCentral doesn't have any record of an employee named Keenan Ramsey. Neither does Language I/O, one of the previous employers she listed. And "NYU's records do not reflect anyone named Keenan Ramsey receiving an undergraduate degree of any type," university spokesperson John Beckman told NPR.

The technology most likely used to create Ramsey's photo, known as a generative adversarial network, or GAN, has been around only since 2014, but in that time has rapidly become better at creating lifelike faces by training on large datasets of real people's photos. Today, websites allow anyone to download computer-generated faces for free.

"That face tends to look trustworthy, because it's familiar, right? It looks like somebody we know," he said. He worries that the proliferation of AI-generated content could augur a new era of online deception, using not just still images, but also audio and video "deepfakes."

After the Stanford researchers alerted LinkedIn about the profiles, LinkedIn said it investigated and removed those that broke its policies, including rules against creating fake profiles or falsifying information. LinkedIn did not give details about how it conducted its investigation.

"Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case," LinkedIn spokesperson Leonna Spilman said in a statement. "At the end of the day it's all about making sure our members can connect with real people, and we're focused on ensuring they have a safe environment to do just that."