Visualistan: Safety




Instagram has consistently worked on improving the in-app experience for its teen users, ensuring a safe social space for them to engage with others. Recently, Instagram launched new content recommendation controls that reduce teen users’ exposure to sensitive content, such as content involving self-harm.

 


Instagram has taken into consideration the potential negative impact of a complex topic like self-harm on teens’ mental health. “We will start to remove this type of content from teens’ experiences on Instagram and Facebook, as well as other types of age-inappropriate content,” says the social media company. Instagram already limits recommendations of self-harm-related content within Reels and Explore. Moving forward, these restrictions will also apply to Feed and Stories, even for content from accounts that teen users follow.

 

Linked with this addition is another important safety restriction that will inform Instagram when a teen user searches for terms related to suicide, self-harm, or eating disorders. This enables Instagram to redirect the user to official help services when they search for such terms. Additionally, specific search results that are detected as potentially triggering will be entirely hidden from users.

Instagram is Rolling Out New Protective Measures against Sensitive Content for Teen Users

The Ultimate Guide to Pins Awarded to Emergency Services Personnel

What are all of those colorful bars, badges, medals and more that you commonly see on the uniforms of emergency services personnel such as law enforcement officers, firefighters, and EMS workers? The team at Wizard Pins is here to answer those questions with this fascinating infographic that comes to you as the ultimate guide to pins awarded to emergency services personnel.

The Ultimate Guide to Pins Awarded to Emergency Services Personnel #Infographic



Meta has expanded its measures for the protection of underage users on Facebook Dating in the US. This means that users under 18 years of age will not be allowed to sign up for or access Facebook Dating.

 

This is not an ordinary age verification tool. It is powered by digital identity company Yoti, with whom Meta has been working since June of this year. Yoti’s systems are trained on a large volume of anonymized images of diverse people from around the world. As a result, the company can estimate people’s ages from video selfies with a high degree of accuracy.

 

According to Meta, it will launch the tool across its apps, including Facebook, Instagram and Facebook Dating. During testing of the tool on Instagram, Meta found that people were about four times more likely to complete its age verification requirements than to attempt editing their age from under 18 to over 18. This equated to “hundreds of thousands of people being placed in experiences appropriate for their age.”

Meta Expands its ID-Based Age Verification Tool to Facebook Dating in the US



Meta has announced new privacy updates for better protection of people under 16 years of age on Facebook. These include stricter privacy controls that allow them to limit who can view their profiles, hide tagged posts, and limit comments from non-friends.

 

Meta had earlier launched similar control settings on Instagram, where accounts belonging to users under 16 are automatically set to private.

 

Although these settings cannot guarantee complete safety, they help younger users choose who can interact with them, giving them more control over their profiles and greater awareness of how to protect themselves.

 

Additionally, Meta is testing another safety option that limits children’s exposure to adult-owned accounts that appear suspicious. Meta defines a ‘suspicious account’ as one that “may have recently been blocked or reported by a young person, for example.” These accounts will be prohibited from sending messages to younger users and will also be removed from their recommendations.

Meta is Ensuring Improved Protection of Young Users from Potential Exploitation Via New Safety Tools on Facebook



LinkedIn has announced new tools and insights that target fraudulent and scam content in the app. The new insights will provide useful information on LinkedIn accounts, a new feature will detect AI-generated profile images, and new prompts will alert users to scam messages.

 

Profile insights reveal when a profile was created, when it was last updated, and whether the user has a registered email or phone number in the app. This ‘About this profile’ feature is accessible from the three-dots menu displayed on a profile, and the information it provides can help in detecting potential scammer accounts. The addition of this tool is an important one, especially because millions of fraudulent profiles on LinkedIn were recently identified by MIT Technology Review. The fraudulent accounts have been caught luring users into crypto investment scams in particular.

 

LinkedIn Announces New Tools Dedicated to Better Security for Users



Meta is focused on pushing more commercial activity on its platforms, and with that, the company is also improving safety for brands by rolling out tools that better detect scammers and counterfeits.

 

The Business Manager app, which Meta launched in March of last year, allows brands to upload images of their licensed products so that Meta can detect potential violations, using the uploaded images as references for finding similar matches.

 


Meta is bringing improved alert recommendations to the app, based on expanded detection, as well as the ability for rights holders to upload a list of Facebook Pages and Instagram accounts that are authorized to use their product images.

 

Meta is also giving brands the ability to report potentially infringing ads, Facebook Pages and Instagram accounts used for impersonation, or for counterfeit, trademark or copyright infringement. “This update will improve our ability to proactively detect and remove impersonating content,” says Meta.

 

Meta Updates Business Manager App for Improved Detection of Scam and Infringement

PPE 101: Your Guide To Personal Protective Equipment

Personal Protective Equipment doesn’t start and end with protection against viral outbreaks. All types of workers from different sectors of industry require PPE. 


When you work on a construction site, you require a hard hat to make sure you protect yourself from falling objects. When you work in a kitchen, you need aprons to protect both yourself and the food you prepare. Those working with brick dust will require ventilation equipment, dust masks and nose clips. Divers require breathing apparatus. Cooks require hair nets.

PPE 101: Your Guide To Personal Protective Equipment #Infographic



Instagram has made new additions to its safety tools, which include updated user blocking, an expansion of Hidden Words, and new prompts to reduce offensive interactions.

 

The account blocking option on Instagram allows users to block unwanted accounts, as well as any subsequent accounts created by that person. Now, Instagram has made the option more advanced: blocking an account can also block other accounts that the same person may already have, further reducing the chances of unwanted interaction.


Next, the Hidden Words tool that Instagram previously launched is being expanded to Creator accounts, as well as Story replies. The option lets users filter out certain words they don’t wish to see within DMs and comments; messages and comments containing the selected words are moved to filtered folders. Additionally, Instagram is making Hidden Words available in more languages.

Instagram Updates its Safety Features for Better In-App Experience



TikTok has announced an update to the minimum age requirement for hosting livestreams in the app, which is now 18 years. Previously, users were allowed to host Lives at a minimum age of 16. TikTok further adds that “younger teens need to be aged 16 or older to access Direct Messaging and 18 or older to send virtual gifts or access monetization features.”

 

The video-sharing platform raised the age limit after it received public criticism for incentivizing young people on its app to share provocative and other potentially risky content in pursuit of fame and money.

TikTok Announces New Updates Including New Age Limits and Broadcasting Features



Twitter is working on a new audience control setting that allows users to manage who can mention them in the app. 


App researcher Jane Manchun Wong discovered the update, displayed in the form of a toggle labeled ‘Allow Others to Mention you.’ By turning it off, you would be able to limit mentions of your Twitter handle. Additionally, you would also be able to choose to only allow mentions from people who follow you.

 


Twitter also offers another control setting related to mentions, called ‘Unmention,’ which lets you leave conversations you don’t want to be part of and deactivates your handle link in those threads, so nobody can mention you within them.

Twitter is Reportedly Developing a Control Setting that Restricts Mentions



With Meta planning the future of the digital world via the metaverse, people, especially younger users, will not only get a next-level immersive experience but will also be exposed more often to problems that commonly occur on social media, such as bullying, harassment, and inappropriate content. To get ahead of these problems, Meta is already working to create a safe environment for people to interact with one another.

 

The company’s latest digital and virtual space, Horizon Worlds, is being updated with a new ‘mature audiences’ content rating process. Users are being prompted to set up content ratings for their Horizon Worlds spaces.

Meta Updates Horizon Worlds with a Content Rating Setting
