In an age where misinformation can spread like wildfire across social media, platforms must implement effective policies to curb the spread of false information. In this blog post, we will examine the efforts Instagram and TikTok have made to combat misinformation and online harm.

Photo Credit: ajay_suresh/Flickr
Instagram:
Instagram, one of the leading social media platforms globally, has taken several steps to tackle misinformation on its platform. One of its key strategies is leveraging fact-checking partnerships to identify and label false information. Instagram partners with third-party fact-checkers who review and rate the accuracy of content flagged by users or algorithms. If content is deemed false, it is labeled as such, and its distribution is reduced on the platform.
Meta quietly introduced a “Fact-Checked Control” on Instagram that lets users adjust how visible fact-checked content is, with three options: reduce, reduce more, or don’t reduce. Although Meta describes the control as a response to user feedback, the rollout has drawn backlash, particularly from pro-Palestinian accounts that accuse Instagram of censoring content by default and criticize its handling of posts about the Israel-Hamas conflict; instructions for changing the setting have circulated widely, and concerns persist about stifling pro-Palestinian voices. Meta plans to extend fact-checking to Threads, its X competitor, next year; for now, Threads content cannot be rated. Across its platforms, Meta relies on independent fact-checkers to review and rate posts, removes violating content, and reduces the distribution of content rated false.

Image captured from Consumer Reports
Instagram has implemented measures to limit the reach of content containing misinformation, such as reducing its appearance in users’ feeds and Explore pages. This aims to mitigate the virality of false information and prevent it from reaching a wider audience. Additionally, Instagram provides users with tools to report misinformation, ensuring that the community plays an active role in identifying and flagging false content. This crowdsourced approach enables Instagram to address reported misinformation more quickly.
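To make these mechanics more concrete, below is a minimal sketch of how a demotion step might work, combining a fact-check label with the three-level visibility control described earlier. The data model, setting names, and multipliers are assumptions made for illustration, not Instagram’s actual ranking system.

```python
# Hypothetical sketch: demoting fact-checked posts in a ranked feed.
# The multipliers and data model are illustrative assumptions, not Instagram's real system.
from dataclasses import dataclass

# Assumed mapping from the user's "fact-checked content" setting to a score multiplier.
VISIBILITY_MULTIPLIERS = {
    "dont_reduce": 1.0,   # show fact-checked posts at their normal rank
    "reduce": 0.5,        # default: push rated posts lower in the feed (assumed value)
    "reduce_more": 0.2,   # push rated posts much lower (assumed value)
}

@dataclass
class Post:
    post_id: str
    base_score: float               # relevance score from the normal ranking model
    fact_check_rating: str | None   # e.g. "false", "partly_false", or None if unrated

def ranked_feed(posts: list[Post], user_setting: str = "reduce") -> list[Post]:
    """Re-rank posts, demoting anything a fact-checker has rated."""
    multiplier = VISIBILITY_MULTIPLIERS[user_setting]

    def score(post: Post) -> float:
        if post.fact_check_rating is not None:
            return post.base_score * multiplier
        return post.base_score

    return sorted(posts, key=score, reverse=True)

# Example: a rated post drops below an unrated one under the default setting.
feed = ranked_feed([
    Post("a", base_score=0.9, fact_check_rating="false"),
    Post("b", base_score=0.6, fact_check_rating=None),
])
print([p.post_id for p in feed])  # ['b', 'a']
```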

Photo Credit: Eman235
Examples:
- Instagram labels posts containing misinformation with warnings, informing users about the inaccuracies in the content.
- Content flagged as false by fact-checkers is demoted in users’ feeds and may be accompanied by educational pop-ups debunking the misinformation.
- Users can report misinformation through the platform’s reporting tools, prompting Instagram to review and take necessary action.
Evaluation: Instagram’s efforts to combat misinformation are commendable, but their effectiveness varies. While fact-checking and content demotion are valuable steps, misinformation can still evade detection or spread rapidly before being flagged. Additionally, Instagram’s reliance on third-party fact-checkers may cause delays in addressing misinformation, allowing false content to circulate unchecked for a period of time.
Suggestions for Improvement: To enhance its misinformation mitigation efforts, Instagram could consider:
- Investing in artificial intelligence and machine learning algorithms to detect and flag misinformation (a rough sketch of such a classifier follows this list).
- Providing greater transparency around its fact-checking process, including details about fact-checking methods and results.
- Collaborating with academic institutions and research organizations to develop innovative approaches to combating misinformation. Qualitative methods such as interviews, focus groups, content analysis, and ethnographic observation can also help fact-checkers tailor their strategies to be more effective.
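To illustrate the first suggestion, here is a rough sketch of how a simple machine-learning classifier could flag posts that resemble previously fact-checked misinformation and route them to human reviewers. The training examples, model choice, and threshold are invented for demonstration; a real system would need large labeled datasets, multilingual and multimodal signals, and human review of every flag.

```python
# Illustrative only: a tiny text classifier that flags posts for fact-checker review.
# The labeled examples below are made up; a real system would train on large,
# professionally labeled datasets and would never remove content automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = previously rated false by fact-checkers, 0 = not rated false (toy data).
posts = [
    "Miracle cure doctors don't want you to know about",
    "This one trick reverses aging overnight, share before it's deleted",
    "Local library extends weekend opening hours",
    "City council approves new bike lanes downtown",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

def flag_for_review(text: str, threshold: float = 0.7) -> bool:
    """Route a post to human fact-checkers if the model thinks it resembles known misinformation."""
    prob_misinfo = model.predict_proba([text])[0][1]
    return prob_misinfo >= threshold

print(flag_for_review("Secret cure they don't want you to know"))
```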
TikTok:

Photo Credit: Nordskov Media
TikTok has also taken steps to address misinformation and online harm within its community. Like Instagram, TikTok partners with fact-checking organizations to verify content and label false information, and it has implemented measures to promote authoritative sources and credible information. Verified accounts from trusted organizations and experts are promoted to help users identify credible sources. Additionally, TikTok provides educational resources to help users navigate misinformation and adjusts its algorithm to prioritize content from authoritative sources.
A recent study conducted a preregistered survey experiment with 1,169 U.S. participants, presenting TikTok-style videos on six misinformation topics. Participants were randomly assigned to one of three conditions: misinformation-only, correction-only, or debunking (misinformation followed by correction). Results showed a marginal improvement in participants’ ability to distinguish subsequent true videos from false ones in the debunking condition. Belief in misinformation was significantly lower in the correction conditions than in the misinformation-only condition, particularly when corrections followed misinformation. The study highlights the effectiveness of correction videos on TikTok and emphasizes the importance of debunking misinformation on the platform.
During critical events or periods of heightened misinformation, TikTok surfaces content from trusted sources, such as government agencies or reputable news outlets, to provide users with accurate information. Despite concerns about TikTok’s ties to the Chinese government, misinformation, and its impact on teenagers, the platform’s popularity continues to soar. New Public highlights instances of misinformation on TikTok, emphasizing the need for platforms to be more transparent about their efforts to combat falsehoods. Overall, TikTok appears to prioritize community guidelines and anti-misinformation policies, empowering users to flag false content and abusive conduct. By fostering a culture of accountability, TikTok strives to uphold a safe and trustworthy environment for its user community.
Examples:
- TikTok labels content identified as misinformation with warnings, directing users to credible sources for accurate information.
- During crises or emergencies, TikTok elevates content from authoritative sources to ensure users receive reliable updates.
- TikTok’s community guidelines prohibit misinformation and encourage users to report violations, enabling the platform to take action against offenders.
Evaluation: TikTok’s efforts to combat misinformation demonstrate a proactive approach to safeguarding its community. By leveraging fact-checking partnerships and promoting credible sources, TikTok aims to prevent the dissemination of false information and foster a trustworthy environment for its users. However, like Instagram, the effectiveness of TikTok’s policies may vary, and there’s always room for improvement.
Suggestions for Improvement: To further enhance its misinformation mitigation efforts, TikTok could consider:
- Investing in user education initiatives to enhance media literacy and critical thinking skills among its community members.
- Implementing more stringent measures to deter the creation and sharing of false information, such as temporary suspensions for repeat offenders (a simple strike-based sketch follows this list).
- Collaborating with experts in psychology and behavioral science to understand and address the underlying factors contributing to the spread of misinformation on the platform.
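As a small illustration of the second suggestion, the sketch below shows what a strike-based suspension rule for repeat offenders might look like. The strike limit and suspension length are hypothetical values chosen for the example, not TikTok’s actual enforcement rules.

```python
# Hypothetical sketch of a strike-based enforcement policy for repeat misinformation offenders.
# The threshold and duration are invented for illustration; they are not TikTok's actual rules.
from datetime import datetime, timedelta

STRIKE_LIMIT = 3                       # assumed number of confirmed violations before a suspension
SUSPENSION_LENGTH = timedelta(days=7)  # assumed length of a temporary suspension

def apply_strike(strike_count: int, now: datetime | None = None):
    """Record one confirmed violation and decide whether to suspend the account.

    Returns the new strike count and the suspension end time (or None if no suspension)."""
    now = now or datetime.now()
    strike_count += 1
    if strike_count >= STRIKE_LIMIT:
        return strike_count, now + SUSPENSION_LENGTH
    return strike_count, None

# Example: the third confirmed violation triggers a week-long suspension.
strikes, suspended_until = apply_strike(2)
print(strikes, suspended_until)
```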
The effectiveness of policies implemented by platforms like Instagram and TikTok in combating misinformation can vary. Some measures, such as fact-checking partnerships and content moderation algorithms, have shown promise in reducing the spread of false information. However, the constantly evolving nature of online misinformation poses challenges, and not all policies may effectively address emerging threats.
One common issue with existing policies is a lack of transparency. Users often don’t fully understand how content moderation decisions are made or why certain posts are flagged as false. Increasing transparency by providing clear explanations for moderation actions could help build trust and credibility among users.
Additionally, while platforms have made efforts to combat misinformation, there may be gaps in coverage or enforcement. For example, certain types of misinformation, such as deepfakes or manipulated media, may not be adequately addressed by current policies.
To enhance policy effectiveness, platforms should focus on transparency in enforcement, expand fact-checking efforts, tackle emerging threats such as deepfakes, empower users with critical-thinking skills, and collaborate with researchers for continuous improvement. These actions can strengthen platforms’ efforts to combat misinformation and create a safer online space.
