
Twitter says it will remove images of people shared without their consent

 



Twitter announced in a blog post on Tuesday that it has updated its privacy policy to allow it to remove images of people that were posted without their consent.

 

The social media giant's existing policy prohibits the publication of people's private information, such as their addresses, phone numbers, identification documents, and medical records.

 

The policy now also covers "private media," which the company says can be used to "harass, intimidate, and reveal individuals' identities."

 

The company stated that "sharing personal media, such as images or videos, may infringe on an individual's privacy and may result in emotional or physical harm to that individual."

 

Women, activists, dissidents, and members of minority communities are "disproportionately affected by the misuse of private media," according to the company.

 

Twitter said that before removing an image or video, it will require a first-person report, or a report from an authorized representative, to determine whether the individual consented to the media being shared.

 

Once it determines that personal media has been shared without permission, the company said, it will remove the content from its platform.

 

It added that the policy changes do not apply when the public interest or an emergency situation is at stake.

 

The policy does not apply when media featuring public figures or individuals is shared in the public interest or adds value to public discourse, according to the company.

 

The company noted that account holders sometimes share images or videos of private individuals to help someone in a crisis, such as in the aftermath of a violent event, or as part of a newsworthy event of public interest, and that this may outweigh the safety risks to the individual.

 

The new measures took effect globally on Tuesday. A number of users expressed concern, criticizing them as rushed and potentially leading to excessive censorship.

 

In a series of tweets later clarifying the changes, the company said that images and videos depicting public events, such as mass protests and sporting events, would typically not be considered in violation of the new policy.

 

"The importance of context cannot be overstated. There are numerous exceptions to our existing policy on private information that allow for thorough coverage of newsworthy events and conversations that are in the public interest," the company said.

 

It also said that in deciding whether to act, it would consider whether the image is publicly available and/or covered by journalists, whether the image and accompanying tweet text add value to public discourse, whether it is shared in the public interest, and whether it is relevant to the community.

 

Some users, however, continued to express dissatisfaction with the ambiguity of the policy updates.

 

"Privacy and data protection laws generally protect public interest reporting in a variety of ways. When it comes to your new policies, how do you intend to maintain the same level of balance that you have now?" one user asked Twitter's @Policy account.

 

"Does Twitter's new policy effectively forbid candid street photography on the platform? Despite their good intentions, blanket removal policies have a significant flaw, as this case demonstrates," another user tweeted.

 

Twitter's move comes at a time when social media companies face greater scrutiny than ever over how well they protect their users' data and personal information.

 

In September, following revelations that the social media platform may have a negative impact on young users, Instagram announced that it was suspending plans to develop a version of its product aimed specifically at children under the age of 13.

 

In a similar vein, Meta, the parent company of Facebook and Instagram, announced plans in November to limit advertisers' ability to target users based on sensitive categories such as religion and race.
