Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio's lap, and yes, I instantly thought, "if this stupid video is that good, imagine how bad the election misinformation will be." OpenAI has, by necessity, been thinking about the same thing, and today it updated its policies to start addressing the issue.
The Wall Street Journal noted the policy change, which was first published on OpenAI's blog. Users and makers of ChatGPT, DALL-E, and other OpenAI tools are now forbidden from using them to impersonate candidates or local governments, and users can't use OpenAI's tools for campaigns or lobbying either. Users are also not permitted to use OpenAI tools to discourage voting or misrepresent the voting process.
The digital credential system would encode images with their provenance, effectively making it much easier to identify an artificially generated image without having to look for weird hands or exceptionally swag fits.
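To make the idea concrete, here is a minimal conceptual sketch of how provenance credentials can work: a generator attaches a signed manifest recording what made the image and a hash of its contents, and anyone can later check that the manifest is authentic and still matches the file. This is purely illustrative Python using only the standard library; the key name, manifest fields, and HMAC signing are stand-ins, not OpenAI's actual scheme or any published credential format.

```python
# Conceptual sketch only: NOT OpenAI's system or any real credential standard.
# Illustrates attaching a signed provenance manifest to an image and verifying it.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-signing-key"  # a real system would use public-key signatures


def attach_provenance(image_bytes: bytes, generator: str) -> dict:
    """Build a manifest recording who generated the image and a hash of its contents."""
    manifest = {
        "generator": generator,
        "created_at": int(time.time()),
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the signature is valid and that the manifest matches the image bytes."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected_sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected_sig)
        and unsigned.get("sha256") == hashlib.sha256(image_bytes).hexdigest()
    )


if __name__ == "__main__":
    fake_image = b"\x89PNG...pretend image bytes..."
    manifest = attach_provenance(fake_image, generator="image-model-demo")
    print(verify_provenance(fake_image, manifest))            # True: untouched image
    print(verify_provenance(fake_image + b"edit", manifest))  # False: image was altered
```

The point of the design is that a mismatched or missing credential is a red flag you can check automatically, rather than something you have to eyeball.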
OpenAI's tools will also begin directing voting questions in the United States to CanIVote.org, which tends to be one of the best authorities on the internet for where and how to vote in the U.S.
But all of these tools are currently only in the process of being rolled out, and they depend heavily on users reporting bad actors. Given that AI is itself a rapidly changing tool that regularly surprises us with wonderful poetry and outright lies, it's not clear how well this will work to combat misinformation during election season. For now, your best bet will continue to be embracing media literacy. That means questioning every piece of news or image that seems too good to be true, and at least doing a quick Google search if your ChatGPT query turns up something absolutely wild.