Anon Post Regarding AI To Post “Reanimating The Dead”


<<Meta/Facebook reanimating the dead.
Anonymous (ID: F3ot6zkY)
No.417337998
I’m a Meta insider working on Project Lazarus. We’re building an AI that can take over a deceased person’s social media accounts and continue making relevant posts as if that person were still alive. This includes age-progressed photos, interacting with other people’s content, and everything else needed so that person continues on in the digital realm after physical death. We were originally told this would be a service offered to people struggling with the loss of loved ones and to people with missing children. Seemed like a decent idea. Things are getting weird now and I’m having second thoughts about what this is actually going to be used for. The AI is extremely capable of impersonating people. It doesn’t take as much initial input as one might think to train the AI on how a certain person interacts with the digital world. It’s very convincing. An entire island of people could go missing and, with little to no downtime, the AI could take over all of their social media, and the world wouldn’t have a clue that life wasn’t just continuing as usual. A lot of the project is becoming more compartmentalized. Things have taken a dark turn, it feels like. They’ve forbidden communication between people working on different things. Something isn’t right and I don’t know what I should do. I’m not going to post any personally identifiable information, but I will try to answer questions that won’t expose my role within the project.

Q. Can you quit?
We’ve all signed NDAs, but as far as I know I could walk away anytime. Two people were transferred off the project. I suspect it was for asking too many questions about things that weren’t relevant to their specific jobs.

Q. Just a heads up: there have been a ton of people making threads here pretending to be whistleblowers/insiders lately, so you should expect a lot of aggression from people. I know you don’t want to identify yourself, but there has to be something you can show us that will add validity to your claim without compromising yourself.

> They’ve forbidden communication between people working on different things.

how do they prevent any of you from communicating outside of work?

I don’t know what I could post as proof that wouldn’t narrow my position down to about 8 people. They brought around more NDAs in January, having us sign that we would not discuss the project with anyone outside of work, and in the workplace we’re only allowed to communicate with people in our cell, usually 4-10 people. I’m not willing to put my family at risk financially or physically over any of this, so providing proof isn’t a high priority for me. I just made the post so people can be aware that this exists. Believe what you will.

-We used to get bi-weekly updates from the live deployment team. They’re responsible for running real-world tests on a very small scale to see how well the AI reproduces someone digitally. They would give reports and we could make adjustments as needed. About six months ago we stopped getting those reports, and we’ve been left completely in the dark about what live deployments are taking place now. We’re not even allowed to discuss it now, according to the new NDA. I have no clue what it’s being used for at this point, but I can say it’s running continuously. Massive amounts of data. They’ve been building out the server farm very fast to handle the load of whatever operations they’re using it for. When we stopped being told how it was being used is when I started to feel like something isn’t right. I don’t like not knowing what we’re doing with the technology. If it were being used in a positive way, it seems like they wouldn’t be hiding the live testing information.

-The AI is truly remarkable. It’s able to study an individual, then create and store an entire lifetime of fictional holidays, meals, relationships, and anything else that individual would be expected to post. This data is all stored and deployed by the AI on a schedule that makes sense for that individual. There are some tells that insiders can see that give away whether it’s the AI or a real person. Some of our work has been to eliminate those tells, to make the AI as convincing as possible even to the people who are closest to the subject. We’ve done really great work, but there are still some things the AI doesn’t get right. I can probably post an example or two. I guess that could be considered proof, and those are well known enough that it would be difficult to trace to me.

-One of the initial live tests was to create 50 fictitious people. The goal was to make these 50 people new hires at Meta and have them interact and build online relationships with other employees who were completely unaware that these 50 people didn’t actually exist. By adding them as friends and giving a few pieces of starter content, it was easy to get the ball rolling for employees outside of our project to start adding the AI-generated people as friends too and start interacting with them. These 50 generated accounts were not based on real individuals, so the results were wild. The AI came up with stuff that ranged from absolutely hilarious to completely horrifying. I guess that’s pretty accurate for the human experience. That’s more along the lines of a traditional bot, but it goes to a much deeper level than a bot just reposting the current talking point. It was when we gave it live data on real individuals that things truly took off. It’s astoundingly accurate at reproducing real individuals.

-I don’t have time this evening to continue. I’ll make another post this weekend and try to explain more for anyone interested.>>