Safety Sells: How Building Trust Creates Products That Users Stick With

Written by Kristen Haddad, Senior UX Researcher, with insights from Ali Smith, Jennifer Olivier, Julia Strunk, and Max Symuleski | Feb 6, 2025

>> Short on time? Jump to our list of actionable steps and practical strategies for product teams.

Imagine this: Mona, a content moderator, sits down with her coffee and logs into the system. It's a routine she knows well, sorting through a mountain of flagged content. Each day brings its own challenges, including hate speech disguised as free speech, disturbing videos that test her resilience, and the occasional debate over whether tomatoes are fruits or vegetables. It’s just another typical day online.

But today, something catches her eye. A post boldly declares, “Pineapple doesn’t belong on pizza - fight me.” Mona smiles at the classic internet argument. The system, however, has flagged the post for promoting violence: it has been removed, and the user has received a warning for breaking community guidelines.

Mona can imagine the user's confusion. They weren’t looking for a fight, just some playful banter.

This scenario highlights the central challenge of Trust and Safety (T&S). Platforms must protect users and ensure safety without creating fear of censorship or misunderstanding. It's a delicate balance between fairness and control, one that T&S teams navigate every day.

What Is Trust & Safety?

At its core, T&S isn't just about preventing harm. It's about creating spaces where people feel safe, supported, and free to interact. Think of T&S like traffic rules. While no one loves stopping at red lights, they prevent chaos at intersections.

Similarly, T&S establishes strategies, policies, and practices that protect privacy, foster authentic interactions, and minimize harm, whether in social platforms or physical products.

Why Trust & Safety Matters

When T&S is neglected, the consequences can be catastrophic. Platforms risk losing users, facing public backlash, or even legal trouble. Take Zoom’s rise during the early pandemic. Overnight, it became the go-to platform for meetings, classes, and virtual happy hours. But “Zoom-bombing,” uninvited guests crashing meetings with offensive content, sparked backlash. Zoom had to act fast, introducing security updates to regain trust.

Similarly, poorly designed GPS tracking systems have been misused by bad actors to stalk individuals, eroding user confidence. In response, some companies now provide features like temporary sharing and consent-based tracking to prevent misuse.

Smart home devices present another concern. A video doorbell could let a domestic abuser track someone’s movements, or a smart thermostat could be used to manipulate a home's comfort in harmful ways. As explored in Design for Safety by Eva PenzeyMoog, adding safety features like shared permissions and activity logs helps make these products both safe and ethical.

The Nuances of Trust & Safety

T&S isn’t one-size-fits-all. For global platforms, cultural differences must be considered. What’s funny in one culture might be offensive or misunderstood in another. British humor, with its signature deadpan delivery and tongue-in-cheek style, is a prime example. 

Julia Strunk, a Senior UX Researcher at AnswerLab, highlights the crucial role of cultural understanding, "In one instance, a British user made a light-hearted joke with a friend, leading to their content being removed. If they had the chance to explain, ‘It was a joke,’ it could have made all the difference."

Agency comes up a lot in my interviews. Users want a dialogue with decision-makers.

-Julia Strunk, Senior UX Researcher

Creating systems with feedback loops that let users explain their intent or challenge decisions is key to building fairness and trust. Algorithms often miss subtle context, which can lead to problems like over-censoring and pushing users away or allowing harmful content to slip through. 

A study by Molina and Sundar found that allowing users to provide feedback to algorithms enhances trust by increasing user agency. Finding the right balance requires smarter algorithms and culturally aware human oversight.
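
To make that feedback loop concrete, here is a minimal sketch in Python of an appeal flow that attaches a user's explanation to an automated moderation decision and routes the case to a human reviewer. The class and function names are illustrative, not drawn from any particular platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationDecision:
    """An automated decision on a piece of flagged content."""
    content_id: str
    rule_violated: str            # e.g. "violence"
    confidence: float             # model confidence that the rule was broken
    appeal_note: Optional[str] = None
    status: str = "enforced"      # "enforced", "under_review", or "reversed"

def notify_review_queue(decision: ModerationDecision) -> None:
    # Placeholder: a real system would enqueue the case for a culturally
    # aware human moderator rather than print to the console.
    print(f"{decision.content_id} queued for review: {decision.appeal_note!r}")

def submit_appeal(decision: ModerationDecision, user_explanation: str) -> ModerationDecision:
    """Let the user explain their intent and send the case to a human reviewer."""
    decision.appeal_note = user_explanation
    decision.status = "under_review"
    notify_review_queue(decision)
    return decision

# Example: the pineapple-pizza post from the introduction
decision = ModerationDecision(content_id="post_123", rule_violated="violence", confidence=0.62)
submit_appeal(decision, "It was a joke between friends, not a threat.")
```

The point of the pattern is simply that the user's explanation becomes part of the record a reviewer sees, which is what creates the sense of dialogue and agency described above.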

The Impact of Emerging Technologies

As T&S teams handle current issues, new technologies bring more challenges. Deepfakes, for example, are highly realistic fake videos that blur the line between real and fabricated content, driving the spread of misinformation. 

Generative AI adds further concerns, including questions about ownership of AI-created content, risks of identity theft such as fake voicemails imitating someone’s voice, and biases in the content it generates. As highlighted in Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher, these biases and risks are not just technical oversights but systemic issues that can perpetuate harm if left unchecked.

Max Symuleski, AnswerLab’s AI Product Manager, explains, "While recommendation engines gradually impact behavior and mental health over time, Generative AI, with its speed of output, presents more immediate risks, especially with body image content targeting young women and men disproportionately."

“Generative AI will introduce a lot more possibility for risk and harm. What I hope is happening is that the big companies building these tools are doing better research and being more proactive than they were with recommendation engines.”

-Max Symuleski, AI Product Manager

>> Dive deeper: Cracking the Code: Lessons Learned Moderating UX Research for GenAI Products

Meanwhile, emerging platforms like augmented reality (AR) and virtual reality (VR) introduce new challenges around user safety. In VR spaces, harassment is prevalent and often involves the invasion of personal boundaries. A study by S.B., Sabir, and Das highlights that existing safety controls, such as muting and blocking, are often seen as ineffective or hard to use, particularly in crowded environments or when submitting reports.

Ali Smith, a Principal UX Researcher who studied the experiences of women in VR, emphasizes the importance of designing effective safety tools. “There are critical inflection points in a user's journey that must function smoothly to effectively address instances of VR abuse."

Users need to feel empowered and encouraged to use safety tools; otherwise, they may turn to ineffective personal solutions or simply leave the platform entirely.

-Ali Smith, Principal UX Researcher

Smith shared that first and foremost, users need to be aware that safety tools exist at all. "Interactions in VR social spaces happen quickly, often leaving little time for users to locate and access these tools. The tools must be swift and seamless to use—a user-friendly experience is particularly crucial for someone who is understandably flustered."

For many female VR gamers, one negative experience can outweigh countless positive ones, leading to a recurring pattern of disengagement from the platform.

As technology evolves, so do the risks—but so do the solutions. To stay ahead, organizations need to include safety features in their products from the start. AI moderation now does more than just block certain words: it uses natural language processing to analyze tone and intent, helping platforms like YouTube and TikTok distinguish between harmful content and casual conversation.
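
As a rough illustration of that shift, and not any platform's actual pipeline, the Python sketch below contrasts naive keyword blocking with scoring a post against an intent classifier. The score_toxicity function is a stand-in for whatever NLP model a team actually uses, and the thresholds are invented.

```python
BLOCKLIST = {"fight", "destroy"}  # the old keyword-matching approach

def keyword_flag(text: str) -> bool:
    """Flags any post containing a blocked word, regardless of intent."""
    return any(word in text.lower() for word in BLOCKLIST)

def score_toxicity(text: str) -> float:
    """Stand-in for an NLP model estimating hostile intent (0.0 to 1.0)."""
    # A real system would call a trained classifier here; this placeholder
    # returns a low score so the example runs end to end.
    return 0.05

def moderate(text: str) -> str:
    if score_toxicity(text) > 0.8:
        return "remove"           # clearly harmful
    if keyword_flag(text):
        return "human_review"     # keyword hit, but the intent looks benign
    return "allow"

print(moderate("Pineapple doesn't belong on pizza - fight me."))  # -> "human_review"
```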

Deepfake detection tools, like Microsoft’s, check for inconsistencies in pixels and metadata to spot altered images and videos before they spread. Meta has also added tools like watermarking and authentication to verify AI-generated content.

In VR spaces, platforms like Horizon Worlds use personal boundary settings to prevent harassment in real-time. These technologies proactively reduce harm while keeping users engaged. By integrating ethical safeguards early on, companies can use innovation to build trust, not just mitigate risks.
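
A personal boundary of the kind Horizon Worlds describes can be thought of as a simple distance check run continuously between avatars. The Python sketch below is an illustrative approximation with an assumed radius, not Meta's implementation.

```python
import math

PERSONAL_BOUNDARY_METERS = 1.2  # assumed default radius; real platforms let users adjust it

def enforce_boundary(my_position, other_position):
    """Return a corrected position for another avatar if it crosses the boundary."""
    d = math.dist(my_position, other_position)
    if d == 0 or d >= PERSONAL_BOUNDARY_METERS:
        return other_position
    # Push the intruding avatar back to the edge of the boundary.
    scale = PERSONAL_BOUNDARY_METERS / d
    return tuple(m + (o - m) * scale for m, o in zip(my_position, other_position))

print(enforce_boundary((0.0, 0.0, 0.0), (0.5, 0.0, 0.0)))  # -> roughly (1.2, 0.0, 0.0)
```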

The Human and Social Dimensions

Content moderators are the unsung heroes of T&S, but their work comes at a cost. Jennifer Olivier, a Principal UX Researcher at AnswerLab, describes them as “online first responders,” facing the internet’s darkest corners daily.

I came to think of [content moderators] as the online first responders. Think of our law enforcement and firefighters, and the trauma they’re experiencing. And then there are the online first responders, exposed to horrific content daily.

-Jennifer Olivier, Principal UX Researcher

Olivier went on to share "Their work is essential to keeping platforms safe, yet they operate in the shadows, facing PTSD symptoms with minimal support. Ultimately, the goal was also to train machines to take on more of the burden and mitigate harm to these individuals."

To protect moderators, companies need to offer strong training, mental health support, and tools to limit their exposure to harmful content. But the harm extends to users too. For example, unchecked hate speech can lead to real-world violence and extremism, making T&S not just a legal requirement but a moral responsibility. 

Beyond hate speech, users can also be harmed through online harassment, the spread of misinformation, and exposure to graphic or disturbing content. The U.S. Surgeon General has highlighted that prolonged exposure to online harm is linked to increased rates of anxiety, depression, and even PTSD, particularly among younger users. This underscores the need for strategies to safeguard both mental health and digital well-being.

Practical Strategies for Product Teams

So, how can product teams ensure their products are safe, trustworthy, and well-received by users? Here are actionable steps:

Build Safety from the Start

  • Proactive Design: Assume your product could be misused and plan for it. Identify risks early by creating scenarios of potential abuse, and add safeguards before launching.
  • Privacy by Design: Make privacy settings simple and easy to use. Use practices like minimizing the data you collect and making settings easy to understand and find, which helps build trust with users.
  • Safety Defaults: Set security features like restricted sharing or strong passwords as the default, since users often don’t activate these features on their own.
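
For example, those defaults can live in a single settings object that ships with the most protective options already switched on. The field names below are illustrative and not tied to any specific product.

```python
from dataclasses import dataclass

@dataclass
class AccountDefaults:
    """Illustrative privacy and safety defaults a new account starts with."""
    profile_visibility: str = "friends_only"  # not public by default
    location_sharing: bool = False            # opt in, never opt out
    two_factor_required: bool = True
    data_retention_days: int = 30             # keep only what is needed, only as long as needed

new_user_settings = AccountDefaults()  # a brand-new user gets the safe configuration automatically
print(new_user_settings)
```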

Create and Enforce Clear Policies

  • Community Guidelines: Write clear, culturally aware rules so users know what behavior is acceptable and what the consequences are for breaking the rules.
  • Transparent Enforcement: Explain how rules are enforced and allow users to appeal decisions or clarify their actions.

Use Advanced AI with Human Oversight

  • Smarter Algorithms: Train AI systems with diverse datasets to better recognize nuance across languages, cultures, and contexts.
  • Human in the Loop: Employ moderators who understand local cultures to handle cases where AI might fail to understand context.
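
One common way to combine the two, sketched below in Python with invented thresholds, is to let the model act alone only when it is highly confident and route everything ambiguous to a human moderator with the relevant regional expertise.

```python
def route_flagged_content(text: str, locale: str, model_score: float) -> str:
    """Decide whether the model acts alone or a human moderator takes over.

    model_score is the classifier's confidence that the content violates policy;
    the thresholds here are illustrative, not recommendations.
    """
    if model_score >= 0.95:
        return "auto_remove"             # near-certain violations
    if model_score <= 0.10:
        return "allow"                   # near-certain false alarms
    return f"human_review:{locale}"      # ambiguous cases go to a regional moderator

print(route_flagged_content("Pineapple doesn't belong on pizza - fight me.", "en-GB", 0.42))
# -> "human_review:en-GB"
```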

Plan for Global Use

  • Localized Strategies: Work with experts in different regions to understand how your product might be used or misunderstood in various markets.
  • Regional Moderation: Hire moderators familiar with local languages and customs to ensure fair rule enforcement.

Address New Risks with Proactive Features

  • Combat Emerging Challenges: For technologies like AR, VR, and generative AI, add features such as personal boundary settings, watermarking for AI content (see the sketch after this list), and tools to flag harmful imagery like overly edited body filters.
  • Monitor Trends: Regularly analyze user behavior to identify and address new risks before they become widespread.
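
The watermarking idea mentioned above can take many forms; one simple, illustrative version is a signed provenance record bound to a hash of the content, rather than a pixel-level watermark. The sketch below uses only Python's standard library and is not how Meta's tools or content-credential standards actually work.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-real-secret"  # illustrative only

def label_ai_content(media_bytes: bytes, model_name: str) -> dict:
    """Attach a tamper-evident provenance record to AI-generated media."""
    record = {
        "generator": model_name,
        "ai_generated": True,
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches its record and the record is unaltered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and claimed["content_hash"] == hashlib.sha256(media_bytes).hexdigest())

media = b"...generated pixels..."
label = label_ai_content(media, "example-image-model")
print(verify_label(media, label))  # -> True; altering the media or the record -> False
```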

Listen to Users

  • Continuous Feedback: Create ways for users to report issues and share concerns, and act quickly to show accountability.
  • Usability Testing: Test safety features with real users to ensure they work as intended and are easy to use.

Support Content Moderators

  • Protect Mental Health: Provide moderators with access to mental health resources, such as therapists, along with tools that reduce their exposure to harmful content.
  • Automate Repetitive Tasks: Use AI to handle simpler tasks, freeing moderators to focus on more complex cases.

Real-World Examples

  • Ride-Sharing Apps: Lyft and Uber added safety features to protect riders and drivers, such as emergency buttons, identity verification, audio recording, passcodes, location sharing with friends, and anonymized phone numbers.
  • Wearable Tech: Garmin includes incident detection features that alert preselected contacts if the user is in distress.
  • Social Media Platforms: Pinterest employs a combination of AI-powered tools and human moderation. It also provides mental health resources and search interventions (e.g., showing supportive content when users search for terms related to self-harm).

How AnswerLab Can Help

In our increasingly interconnected and socially conscious world, organizations have a responsibility to prioritize humanity. This responsibility manifests through their Trust and Safety practices, demonstrating their values and commitment to protecting people.

To keep their platforms trustworthy and stay ahead of evolving technology, companies must proactively integrate T&S considerations into their research and design processes. 

At AnswerLab, we help organizations create safer, smarter, and more positive products. Whether it's simplifying privacy settings, building ethical AI, or improving moderation systems, we're here to guide you.

Ready to make your platform a beacon of trust? Let’s start the conversation.

***

Further Learning Opportunities

If you’re looking to expand your knowledge on Trust and Safety, here are valuable resources to explore: