“Everything You Are Hearing and Seeing Can Be Fake”: Delhi HC on AI and Deepfakes

On Wednesday, the Delhi High Court acknowledged the growing prevalence of deepfakes, observing, “While it’s true that sometimes deep fakes are used, not everything you see or hear is necessarily fake.”

The court made these observations in response to a plea filed by Advocate Chaitanya Rohilla, represented by Advocate Manohar Lal. The plea highlighted the urgent need for regulatory frameworks to manage emerging technologies due to their potentially broad and significant impact.

The bench of Acting Chief Justice Manmohan and Justice Tushar Rao Gedela remarked, “I am shocked that what I can see with my own eyes and hear with my ears is fake.”

The petitioners dismissed the Central government's response, arguing that the three proposed solutions were merely advisory and lacked mandatory enforcement. They contended that AI-generated videos should carry an explicit label, rather than a watermark, to clearly indicate that the content was created by artificial intelligence.

The petitioners argued, “The problem is once such a video is posted, the damage is done. Even if we complain to the grievance officer and they take action within 72 hours, by then it is shared numerous times.”

The court expressed skepticism, noting that individuals trying to present fake videos as genuine would likely not voluntarily include such labels. However, the petitioners clarified that, while individual users might not label the videos themselves, the platforms providing these services should be required to include such labels on all AI-generated content.

The petitioners further pointed out that many platforms already offer these labeling services at no cost. In response, the court questioned the feasibility of implementing such technology, noting that websites could be replicated and inquiring about the availability of this technology. The petitioners explained that while purchasing the technology might not be feasible, it could be rented or subscribed to on a per-video basis.

The petitioners emphasized that platforms should mandate labels for any AI-generated video, audio, or image, citing Argentina's practice of labeling AI-generated images as an example. They argued that without such measures, controlling the risks associated with AI would be challenging. However, the court raised concerns about jurisdiction, noting that many companies providing these services might be based outside the country.

Acknowledging the broader implications of the issue, the court directed the petitioners to submit detailed suggestions, including examples or methods employed by other countries, within three weeks. Additionally, the court instructed the Central Government to develop viable solutions. The matter has been scheduled for a hearing on October 24.

Case Title: Chaitanya Rohilla v. Union of India (W.P.(C)-15596/2023)
