As child-protection advocates and the broader public continue to raise concerns about children's use of social media, several platforms have introduced measures that let parents govern how much time their children spend on screens. A wave of new laws has also given parents more control over their children's interactions on platforms such as Instagram, TikTok, and YouTube. These measures serve a twofold purpose: helping young users navigate the online environment safely while protecting them from explicit material, bullying, and sexual predators.

Social networks have begun rolling out parental-control options designed to make their services safer. Instagram offers a "Family Center" and TikTok a "Family Pairing" feature, both of which allow parents to monitor activity, set time limits, filter content, and restrict direct messaging. Parents' ability to oversee their children's online activity has expanded, driven in part by pressure from governments and child-advocacy groups calling on technology firms to put minors' interests first. In one survey, 75% of parents said that stronger safety filters give them more confidence in letting their children use social networks.

New Restrictions Aimed at Safeguarding Children’s Online Experience


Parental controls are not the only safeguard. Social media platforms have also introduced age requirements and content filtering as further protection for children. YouTube, for example, has created a dedicated application, YouTube Kids, which shows only child-appropriate programming and removes videos that could be harmful or unsuitable for young viewers.

Beyond content filtering, these platforms have introduced age verification at account creation. Instagram and TikTok in particular enforce stricter age-verification policies to prevent children under 13 from opening accounts without parental permission. The platforms also employ algorithms that detect and remove inappropriate content more effectively, further shielding young users from potentially harmful or explicit material.

In a survey of parents, 68% supported these changes, with most regarding them as essential in the fight against child predation on social networking sites. However, some parents argue that these tools remain inadequate and that broader cooperation at the international level, involving governments, technology companies, and parents, is needed to produce better solutions for children's safety online.

All in all, the tighter restrictions adopted by social networks are a positive change that will benefit users below the age of majority. The steps taken so far have earned support from most parents, but continued innovation in this area will remain necessary as digital platforms keep evolving.
