OpenAI announced on Tuesday that it would make its new deepfake detector available to a select group of misinformation researchers, who can test it in real-world scenarios and identify areas for improvement. "This is to kick-start new research," said Sandhini Agarwal, an OpenAI researcher who specialises in safety and policy. "That is truly necessary."
OpenAI’s Deepfake Detector Targets DALL-E 3, But Leaves Gaps in Coverage
According to OpenAI, its new detector correctly identified 98.8% of images produced with its most recent image generator, DALL-E 3. However, the company said the tool was not designed to detect images generated by other well-known generators, such as Midjourney and Stability AI.
This type of deepfake detection can never be flawless because it is based on probability. Consequently, like many other companies, non-profits and university labs, OpenAI is attempting to tackle the issue in several different ways.
OpenAI Joins C2PA Steering Committee to Establish Standards for Content Authenticity
The company is joining the steering committee of the Coalition for Content Provenance and Authenticity, or C2PA, an initiative to provide credentials for digital content, alongside industry heavyweights Google and Meta. A sort of "nutrition label" for photos, videos, audio clips and other media, the C2PA standard indicates when and how a piece of media was created or modified, including with artificial intelligence (AI).
OpenAI Spearheads Effort to Watermark AI-Generated Sounds for Real-Time Identification
Additionally, OpenAI announced that it was working on methods to "watermark" AI-generated audio so that listeners can quickly recognise it, and the company aims to make these watermarks difficult to remove. The AI sector, led by companies such as OpenAI, Google and Meta, is under growing pressure to take responsibility for the material its products produce. Experts are urging the industry to stop users from creating harmful and deceptive content and to provide ways of tracing its origin and spread.
Urgent Calls Intensify for AI Content Provenance Tracking Amid Global Election Surge
There is an increasingly urgent need for methods to monitor the provenance of AI-generated content in a year filled with significant global elections. AI-generated audio and images have already affected voting and political campaigns in recent months in Slovakia, Taiwan and India, among other places.