Visualistan: Safety




Instagram has consistently worked on improving the in-app experience for its teen users, aiming to give them a safe social space in which to engage with others. Recently, Instagram launched new content recommendation controls that reduce teen users’ exposure to sensitive content, such as content involving self-harm.

 


Instagram has taken into consideration the potential negative impact of a complex topic like self-harm on teens’ mental health. “We will start to remove this type of content from teens’ experiences on Instagram and Facebook, as well as other types of age-inappropriate content,” says the social media company. Recommendations of self-harm-related content are already limited within Reels and Explore. Moving forward, these restrictions will also apply to Feed and Stories, including content from accounts that teen users follow.

 

Linked with this addition is another important safety restriction: Instagram will be alerted when a teen user searches for terms related to suicide, self-harm, and eating disorders, enabling it to redirect the user to official help services. Additionally, search results detected as potentially triggering will be hidden from users entirely.

Instagram is Rolling Out New Protective Measures against Sensitive Content for Teen Users

The Ultimate Guide to Pins Awarded to Emergency Services Personnel

What are all of those colorful bars, badges, medals and more that you commonly see on the uniforms of emergency services personnel such as law enforcement, firefighters, and EMS? The team at Wizard Pins is here to answer those questions with this fascinating infographic, the ultimate guide to pins awarded to emergency services personnel.

The Ultimate Guide to Pins Awarded to Emergency Services Personnel #Infographic



Meta has expanded its measures for protecting underage users on Facebook Dating in the US. This means that users under 18 will not be allowed to sign up for or access Facebook Dating.

 

This age verification tool, however, is not a typical one. It is powered by digital identity company Yoti, with whom Meta has been working since June of this year. Yoti’s systems are trained on a large, global dataset of anonymized images of people from diverse backgrounds, which lets the company estimate a person’s age with reasonable accuracy from a video selfie.

 

According to Meta, it will launch the tool across its apps, including Facebook, Instagram and Facebook Dating. While testing the tool on Instagram, Meta found that people completed its age verification requirements about four times as often as they attempted to edit their age from under 18 to over 18. This equated to “hundreds of thousands of people being placed in experiences appropriate for their age.”
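Neither Meta nor Yoti has published the underlying implementation, but the general flow the announcement describes, sampling frames from a video selfie, running an age-estimation model on each, and gating access on the aggregated estimate, can be sketched roughly as below. The `estimate_age` call is a hypothetical placeholder for a trained vision model, not Yoti’s or Meta’s actual API.

```python
# Illustrative sketch of a selfie-based age gate. estimate_age() is a
# hypothetical placeholder for a trained age-estimation model; it is
# NOT Yoti's or Meta's actual system.
from statistics import median

ADULT_THRESHOLD = 18   # minimum age for Facebook Dating
SAFETY_MARGIN = 3      # hypothetical buffer for model error

def estimate_age(frame) -> float:
    """Placeholder: a real system would run a trained vision model here."""
    raise NotImplementedError

def passes_age_gate(video_frames) -> bool:
    # Aggregate per-frame estimates to smooth out single-frame noise.
    estimates = [estimate_age(f) for f in video_frames]
    return median(estimates) >= ADULT_THRESHOLD + SAFETY_MARGIN
```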

Meta Expands its ID-Based Age Verification Tool to Facebook Dating in the US



Meta has announced new privacy updates to better protect users under 16 on Facebook. These include stricter privacy controls that let them limit what others can view on their profiles, hide tagged posts, and restrict comments from non-friends.

 

Meta had earlier launched similar controls on Instagram, where accounts belonging to users under 16 are automatically set to private.

 

While these settings cannot guarantee complete safety, they give younger users more say in who can interact with them, more authority over their profiles, and greater awareness of how to protect themselves.

 

Additionally, Meta is testing another safety option that limits children’s exposure to adult-owned accounts that appear suspicious. Meta defines a ‘suspicious account’ as one that “may have recently been blocked or reported by a young person, for example.” These accounts will be prohibited from sending messages to younger users and will also be removed from their recommendations.

Meta is Ensuring Improved Protection of Young Users from Potential Exploitation Via New Safety Tools on Facebook



LinkedIn has announced new tools and insights that target fraudulent and scam content in the app: new insights that surface useful information about LinkedIn accounts, a feature dedicated to detecting AI-generated profile images, and prompts that alert users to likely scam messages.

 

Profile insights reveal when a profile was created and last updated, and whether the user has a registered email or phone number in the app. This ‘About this profile’ feature is accessible from the three-dots menu on a profile, and the information it surfaces can help in spotting potential scammer accounts. The addition is an important one, especially because MIT Technology Review recently identified millions of fraudulent profiles on LinkedIn, many of them luring users into crypto investment scams.

 

LinkedIn Announces New Tools Dedicated to Better Security for Users



Meta is focused on driving more commercial activity on its platforms, and alongside that push, the company is rolling out tools that better detect scammers and counterfeits, improving safety for brands.

 

The brand protection tool that Meta launched in Business Manager last March lets brands upload images of their licensed products; Meta then uses those uploads as reference images to detect similar, potentially infringing matches.
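Meta hasn’t detailed how its matching works; one common way to implement reference-image matching is perceptual hashing, shown in this illustrative sketch using the Python `imagehash` library (the distance threshold is an assumption for the example, not Meta’s).

```python
# Illustrative reference-image matching with perceptual hashes
# (pip install pillow imagehash). Not Meta's actual detection system;
# the Hamming-distance threshold is invented for the example.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # hypothetical cutoff; lower = stricter matching

def find_matches(reference_paths, candidate_paths):
    """Flag candidate images that look similar to any reference image."""
    refs = {p: imagehash.phash(Image.open(p)) for p in reference_paths}
    flagged = []
    for cand in candidate_paths:
        cand_hash = imagehash.phash(Image.open(cand))
        for ref, ref_hash in refs.items():
            # Subtracting two hashes yields their Hamming distance.
            if cand_hash - ref_hash <= HAMMING_THRESHOLD:
                flagged.append((cand, ref))
                break
    return flagged
```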

 


Meta is bringing improved alert recommendations to the app, based on expanded detection, as well as giving rights holders the ability to upload a list of Facebook Pages and Instagram accounts that are authorized to use product images.

 

Meta is also giving brands the ability to report potentially infringing ads, Facebook Pages and Instagram accounts used for impersonation, or for counterfeit, trademark or copyright infringement. “This update will improve our ability to proactively detect and remove impersonating content,” says Meta.

 

Meta Updates Business Manager App for Improved Detection of Scam and Infringement

PPE 101: Your Guide To Personal Protective Equipment

Personal Protective Equipment doesn’t start and end with protection against viral outbreaks. All types of workers from different sectors of industry require PPE. 


When you work on a construction site, you need a hard hat to protect yourself from falling objects. When you work in a kitchen, you need an apron to protect both yourself and the food you prepare. Those working with brick dust require ventilation equipment, dust masks and nose clips. Divers require breathing apparatus. Cooks require hair nets.

PPE 101: Your Guide To Personal Protective Equipment #Infographic



Instagram has made new additions to its safety tools, including updated user blocking, an expansion of Hidden Words, and new prompts designed to reduce offensive interactions.

 

Instagram’s blocking option already lets users block an unwanted account along with any subsequent accounts that person creates. Now, Instagram has made the option more thorough: blocking also extends to other accounts the person may already have, further reducing the chances of unwanted contact.


Next, the previously launched Hidden Words tool is being expanded to Creator accounts, as well as Story replies. The tool lets users specify words they don’t want to see; DMs and comments containing those words are moved to filtered folders. Additionally, Instagram is making Hidden Words available in more languages.

Instagram Updates its Safety Features for Better In-App Experience



TikTok has announced an update to the minimum age for hosting livestreams in the app, raising it from 16 to 18 years. TikTok further notes that “younger teens need to be aged 16 or older to access Direct Messaging and 18 or older to send virtual gifts or access monetization features.”

 

The video-sharing platform raised the age limit after public criticism that the app incentivizes young people to share provocative and otherwise risky content in pursuit of fame and money.

TikTok Announces New Updates Including New Age Limits and Broadcasting Features



Twitter is working on a new audience control setting that allows users to manage who can mention them in the app. 


App researcher Jane Manchun Wong spotted the update in the form of a toggle labeled ‘Allow Others to Mention you.’ Turning it off would let you limit mentions of your Twitter handle, and you would also be able to allow mentions only from people who follow you.

 


Twitter also offers another mention-related control called ‘Unmention,’ which lets you leave conversations you don’t want to be part of; your handle link is deactivated in those threads as a result, so nobody can mention you within them.

Twitter is Reportedly Developing a Control Setting that Restricts Mentions



With Meta planning the future of the digital world around the metaverse, people, especially younger users, will not only get a far more immersive experience but will also be exposed more often to the problems that commonly occur on social media, such as bullying, harassment, and inappropriate content. To get ahead of these problems, Meta is already working to create a safe environment for people to interact in.

 

The company’s latest virtual space, Horizon Worlds, is being updated with a new ‘mature audiences’ content rating process. Users are being prompted to set a content rating for their Horizon Worlds spaces.

Meta Updates Horizon Worlds with a Content Rating Setting



Meta has taken a new initiative toward the safety of younger audiences with regard to the unsafe, intimate exchanges that teen users may engage in over social media. The company has released a guide on how parents can discuss these practices with their teen children and offer them advice.

 

One of the key talking points Meta focuses on is telling teens that sending or receiving intimate images isn’t something that everyone does, and encouraging them to be cautious when they receive inappropriate images.

Meta’s Latest Guide is Helping Parents Have Conversations with their Teen Children About Exchanging Intimate Content on Social Media



To ensure better protection of young users from unwanted content on its app, TikTok is launching new safety filters and options.

 

Using the ‘Details’ tab in settings, users can block specific hashtags, as well as content containing specific key terms within the description. However, the system only checks manually added descriptions, so videos whose captions don’t mention the blocked terms can still slip through.
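Conceptually, this kind of filter is plain string matching against hashtags and description text, which is also why it misses videos with unlabeled captions. A rough illustrative sketch follows; the blocked terms are invented examples, and this is not TikTok’s actual implementation.

```python
# Illustrative caption-based filtering, not TikTok's actual code.
# Only user-written description text is inspected, which is why
# videos without matching captions slip through.
import re

BLOCKED_HASHTAGS = {"#dieting"}            # hypothetical user choices
BLOCKED_KEYWORDS = {"extreme fitness"}

def is_filtered(description: str) -> bool:
    text = description.lower()
    hashtags = set(re.findall(r"#\w+", text))
    if hashtags & BLOCKED_HASHTAGS:
        return True
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

videos = [
    {"id": 1, "description": "My #dieting journey, day 3"},
    {"id": 2, "description": "Cute dog compilation"},
]
feed = [v for v in videos if not is_filtered(v["description"])]
print(feed)  # only the dog video survives the filter
```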

 


Moreover, TikTok is expanding the limits on exposure to potentially harmful topics that it began testing last December. The system works by capping the number of videos from sensitive categories, such as dieting, extreme fitness, and sadness, that appear in the ‘For You’ feed.

 

TikTok Introduces New Safety Tools for Young Users

Parents Still Have the Power to Protect their Children

Since the pandemic began in 2020, many things about our lives have changed. Now that the scares from COVID seem to be mostly behind us, some changes have stuck with us. Some of these have been for the better; others have not.

Parents Still Have the Power to Protect their Children #Infographic

Get Paid To Be Safe: Gun Liability Insurance

In America, tragically, death from accidental gun injury is 400% more likely than in other comparable nations. Gun safes are key to preventing these inadvertent tragedies, and investing in them, along with other gun safety practices, can also put money back in your pocket.

Get Paid To Be Safe: Gun Liability Insurance #Infographic

 

Twitter has been testing a new Safety Mode feature and is making it available to more users after collecting sufficient feedback through the beta test. The feature relies on automated detection and can be used to autoblock potentially problematic accounts.


 

Safety Mode can be accessed from the Privacy and Safety settings in the app. Autoblocks remain active for 7 days and target accounts that have been engaging in unwanted interactions with you, such as sending repetitive replies or @mentions your way.
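Twitter hasn’t published Safety Mode’s detection logic. Purely as an illustration, a rate-based heuristic over recent mentions might look like the sketch below; the one-hour window and mention threshold are invented for the example.

```python
# Illustrative heuristic only; Twitter has not published Safety Mode's
# actual detection logic. The window and threshold are invented.
from collections import Counter
from datetime import datetime, timedelta, timezone

MENTION_THRESHOLD = 5                 # hypothetical: mentions per hour
BLOCK_DURATION = timedelta(days=7)    # matches the 7-day autoblock window

def accounts_to_autoblock(mentions, now=None):
    """mentions: list of (author_id, timestamp) for recent @mentions.
    Returns {author_id: autoblock_expiry} for accounts over the threshold."""
    now = now or datetime.now(timezone.utc)
    window_start = now - timedelta(hours=1)
    recent = Counter(author for author, ts in mentions if ts >= window_start)
    return {author: now + BLOCK_DURATION
            for author, count in recent.items() if count >= MENTION_THRESHOLD}
```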

Twitter Rolls Out its New Safety Mode to More Users

 


In celebration of ‘Safer Internet Day,’ Instagram has launched two major updates: a new Your Activity display and expanded access to its Security Checkup tool.

 

The Your Activity display is designed to improve content management. It provides a complete overview of a user’s activity on the app, such as time spent in the app, interactions with others, searches conducted, and all uploaded content. It also includes a bulk delete option, which makes it easier to remove posts and clips.

 


As for the Security Checkup feature, it is designed to guide users through the steps needed to secure their account. The steps include “checking login activity, reviewing profile information, confirming the accounts that share login information and updating account recovery contact information such as a phone number or email address,” according to Instagram.

 


The Security Checkup tool was originally launched in July last year, particularly for users whose accounts had been hacked. Instagram is now introducing the tool to all users to give everyone enhanced security controls.

 

Moreover, Instagram is making two-factor authentication via WhatsApp available to users in some countries, adding another layer of security. The company is also testing a tool that helps users regain access to locked accounts through a process in which friends help confirm the user’s identity in the app.

 


With these enhanced control and security tools, Instagram is clearly paying attention to safety and security across its platform, making users’ in-app experience better.

Instagram Improves In-App Security with New Updates on Safer Internet Day

 


Instagram announced a new safety option on its platform, which will require users to share their date of birth. The company means to "create safer, more private experiences for young people" with this feature.

 

Reportedly, you will be asked to enter your date of birth once you open the app. If you do not provide the information, you will be prompted several times for it.

 

Additionally, you will also see warning screens on certain posts, where you will have to enter your birth date in order to view the content. The interface will be similar to the warning screens used for sensitive or graphic content.

Instagram Will Now Ask Users to Provide their Date of Birth

 


Later this year, Apple plans to release new software updates, and the company has just announced a preview of the new child safety features that will be part of them.

 


The preview includes a Communication Safety feature, photo scanning for Child Sexual Abuse Material (CSAM), and expanded CSAM guidance in Siri and Search.

 

With the new Communication Safety feature, children and parents will be warned when sexually explicit photos are received or sent. Such photos are identified through on-device machine learning, which also blurs the images. For instance, if a child receives a sexually explicit photo in the Messages app, an alert will notify them that the photo may contain private body parts and may be hurtful. According to Apple, this feature will be part of iOS 15, iPadOS 15 and macOS Monterey for accounts set up as families in iCloud.
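Apple hasn’t published the classifier itself, but the flag-then-blur flow it describes can be sketched generically. In the sketch below, `is_explicit` is a hypothetical placeholder for an on-device vision model; it is not Apple’s API, and the blur radius is arbitrary.

```python
# Generic classify-then-blur sketch. is_explicit() stands in for an
# on-device ML classifier and is NOT Apple's actual API.
from PIL import Image, ImageFilter

def is_explicit(image: Image.Image) -> bool:
    """Placeholder: a real system runs an on-device vision model here."""
    raise NotImplementedError

def prepare_for_display(path: str):
    """Return (image, show_warning): flagged images come back blurred."""
    img = Image.open(path)
    if is_explicit(img):
        return img.filter(ImageFilter.GaussianBlur(radius=24)), True
    return img, False
```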

 

Apple Shares a Preview of New Child Safety Features for Upcoming Software Updates
