Using generative AI to fabricate images, videos, documents, and conversation records has become remarkably easy. The old notion that 'seeing is believing' is increasingly untenable.
Recently, several people in my circle received extortion attempts built around fabricated 'compromising videos' sent by attackers. Instead of feeling threatened, they shared the videos around as jokes, because everyone already knows what deepfakes can do.
In my own AI experiments, I have found that giving an agent a very specific persona lets it carry out conversation and storytelling in all kinds of styles. It can simulate two people texting each other, two people exchanging emails, even interactions on Twitter. The results are realistic enough that it is hard to tell the genuine from the fabricated.
A related issue: if the AI knowledge base someone relies on is tampered with by attackers, it can also undermine their future sense of what is true. We sometimes lean too heavily on notes and knowledge bases; often it is only by reviewing our notes that we realize how far our memory has drifted from what actually happened (our memory can deceive us). But how terrifying would it be if an AI could quietly rewrite those notes and distort the truth?
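To make this concrete, here is a minimal sketch of one way to notice silent edits to a local notes folder: keep a snapshot of content hashes and diff against it later. The directory layout, file pattern, and function names are illustrative assumptions, not part of any particular note-taking tool.

```python
import hashlib
from pathlib import Path


def snapshot_hashes(notes_dir: str) -> dict[str, str]:
    """Record a SHA-256 fingerprint for every Markdown note under notes_dir."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(notes_dir).rglob("*.md")
    }


def find_tampered(notes_dir: str, snapshot: dict[str, str]) -> list[str]:
    """Return the notes whose current content no longer matches the snapshot."""
    current = snapshot_hashes(notes_dir)
    return [
        path for path, digest in snapshot.items()
        if current.get(path) != digest
    ]


# Hypothetical usage: take a snapshot today, keep it somewhere the AI tooling
# cannot write to, and compare later.
# snapshot = snapshot_hashes("./notes")
# ...
# changed = find_tampered("./notes", snapshot)
```

The catch is that the snapshot itself must live outside the reach of whatever can edit the notes, otherwise a tamperer simply rewrites the hashes too. That is exactly where signatures come in.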
Verifiability built on digital signature technology has become more important than ever. From a certain perspective, blockchain, DID/VC, and related technologies feel as if they arrived from the future; if generative AI had not proliferated, their value might never have stood out so clearly.
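As a small illustration of what that verifiability means in practice, the sketch below signs a message with an Ed25519 key and shows that even a one-byte change breaks verification. It uses the Python `cryptography` package; the message text and the whole setup are illustrative assumptions, not a prescription for any specific system.

```python
# A minimal sketch of signature-based verifiability with Ed25519 keys
# from the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The author signs a piece of content with a private key only they hold.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"This note was written by me on 2024-01-01."  # illustrative content
signature = private_key.sign(message)

# Anyone holding the public key can check that the content is untouched.
try:
    public_key.verify(signature, message)
    print("Signature valid: content is exactly what was signed.")
except InvalidSignature:
    print("Signature invalid: content was altered or forged.")

# A single changed byte breaks verification.
try:
    public_key.verify(signature, message.replace(b"2024", b"2023"))
except InvalidSignature:
    print("Tampered copy rejected.")
```

The signature travels with the content, so a forger would need the private key itself, not just the ability to edit files; DID/VC systems build on the same primitive to bind such keys to identities.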