Effective April 25, 2026
How we keep boopr a good place to be.
These guidelines are the rules that apply to everyone on boopr, alongside our Terms of Service and Privacy Policy. They exist to keep boopr a place worth inviting someone into.
Who boopr is for:
boopr is for users 13 and over, in compliance with COPPA. Users between 13 and 17 confirm that a parent or guardian has reviewed these guidelines with them. boopr does not collect a date of birth at registration; by creating an account, you represent that you meet the age requirement. If we discover that an account belongs to someone under 13, we terminate it and, upon confirmation, delete the data within 30 days. The full age policy lives in the Terms of Service.
A note on how boopr works:
Everything you share on boopr (posts, photos, your profile) is encrypted on your device before it ever reaches our servers. We cannot see your content, and we do not scan it. Moderation here has to work a little differently than on Instagram, X, or Reddit, where the company reads what you post.
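The flow described here (encrypt on the device, upload only ciphertext) can be sketched in a few lines. This is a toy illustration only: it uses a one-time-pad-style XOR to keep the example self-contained, where a real client would use an authenticated cipher, and the function names are hypothetical, not boopr's actual API.

```python
import os

def encrypt_on_device(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy one-time-pad encryption: XOR with a random key of equal length.
    The key never leaves the device; only the ciphertext is uploaded."""
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def server_receives(ciphertext: bytes) -> None:
    # The server stores opaque bytes; without the key, it cannot
    # distinguish this ciphertext from random noise.
    print(f"stored {len(ciphertext)} opaque bytes")

post = b"lunch at the park?"
key, blob = encrypt_on_device(post)
server_receives(blob)

# Only a device holding the key can recover the post.
recovered = bytes(c ^ k for c, k in zip(blob, key))
assert recovered == post
```

The point of the sketch is the trust boundary: the key is generated and kept on the device, so the server's copy of a post is unreadable to the server by construction.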
We rely on you to report violations and describe what happened, your community to hold each other accountable, and our invite system to keep bad actors out. When you report, we receive only metadata and your description; no encrypted content is decrypted or sent to our servers.
Be a good friend. Don't use boopr to hurt, harass, exploit, or deceive people. Don't share content that's illegal or harmful. If someone is making boopr worse, report them. If you get reported, we'll look into it fairly.
That's the spirit of it. The rest of this page fills in the specifics.
Do not create, share, or store the following on boopr:
The following behaviors are not allowed on boopr:
boopr is invite-only by design. Every person on the platform was invited by someone who vouched for them. This creates a chain of accountability that most platforms don't have.
When you invite someone, you're vouching for them. If someone you invite seriously or repeatedly violates these guidelines, we may consider that context when reviewing your account.
This doesn't mean you're liable for everything your invitees do. It means we expect you to invite people you actually know and trust, not strangers from the internet.
Invite accountability in practice:
We look at patterns, not isolated incidents. If one person you invited gets reported for a minor issue, that's normal. If multiple people you invited are involved in serious violations, or if you're distributing invites in bulk to people you don't know, that's a different situation.
Because your posts, photos, and profile are encrypted on your device before they reach us, we can't see your content. Reporting is how you let us know something is wrong. When you report a user or content, you'll be asked to describe the issue and select a reason. Your description is what we use to assess the situation. No encrypted content is decrypted or sent to our servers.
boopr provides in-app reporting on posts, profiles, and boops. When you report, you'll select a reason (harassment, spam, inappropriate content, impersonation, CSAM, NCII, or other) and write a description of what happened (up to 1,000 characters). Your description is required and is the primary information we use to assess the situation.
We record your user ID, the reported user's ID, the content ID, the reason you selected, your description, and the time of the report. No encrypted content is decrypted or sent to our servers. We make enforcement decisions based on your description, report patterns across multiple users, and account-level metadata.
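The report payload described above can be modeled as a small record. This is an illustrative sketch, with field names and the reason values inferred from this page rather than taken from boopr's actual schema; the key property is that it carries only identifiers, a reason, and the reporter's own description, never decrypted content.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Reason(Enum):
    HARASSMENT = "harassment"
    SPAM = "spam"
    INAPPROPRIATE = "inappropriate content"
    IMPERSONATION = "impersonation"
    CSAM = "csam"
    NCII = "ncii"
    OTHER = "other"

MAX_DESCRIPTION = 1000  # characters, per the reporting flow above

@dataclass(frozen=True)
class Report:
    reporter_id: str
    reported_user_id: str
    content_id: str
    reason: Reason
    description: str        # required free-text account of what happened
    reported_at: datetime

    def __post_init__(self):
        if not self.description.strip():
            raise ValueError("a description is required")
        if len(self.description) > MAX_DESCRIPTION:
            raise ValueError("description exceeds 1,000 characters")

report = Report(
    reporter_id="u_123",
    reported_user_id="u_456",
    content_id="post_789",
    reason=Reason.HARASSMENT,
    description="Repeated hostile comments on my last three posts.",
    reported_at=datetime.now(timezone.utc),
)
```

Everything in this record is metadata the server already has or text the reporter typed; nothing in it requires touching encrypted content.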
You can also email [email protected] with details about the issue. For CSAM or child exploitation, email [email protected] with the subject line "CSAM Report" and report to the NCMEC CyberTipline.
A member of the boopr team reviews your report. We may reach out to you or the reported user for more context. We aim to act on reports within 48 hours, though complex cases may take longer. You won't always hear back with details about what action was taken because we balance transparency with the reported user's privacy.
Protecting reporters:
We do not reveal who filed a report to the person being reported. Retaliating against someone for filing a report is itself a violation of these guidelines.
You don't have to wait for us to act. boopr gives you tools to protect yourself:
These tools work instantly and don't require a report. Use them freely.
When we determine that a violation has occurred, we take action proportional to its severity. We consider the nature of the violation, whether it's a first offense or a pattern, the impact on other users, and any relevant context.
For first-time or minor violations, we may issue a warning explaining what was wrong and what we expect going forward. Warnings are documented on your account.
For repeated or more serious violations, we may temporarily restrict your account. This could include limiting your ability to post or invite new users for a set period.
For severe violations, repeated offenses after warnings, or any involvement with CSAM or NCII, we permanently terminate your account. Banned users may not create new accounts.
What we can and cannot do:
Because your content is encrypted with keys only you and your friends hold, we cannot proactively scan for violations. We rely on your reports and descriptions to understand what happened. Enforcement decisions draw on report descriptions, patterns across multiple reporters, behavioral metadata, and account trust signals (such as the number of distinct reporters, the enforcement history of accounts in your invite chain, your account age, and your recent activity rates). We can take action at the account level: warning you, restricting features, or permanently banning the account. We never access or decrypt your content as part of this process.
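The account-level signals listed above could feed a simple review-priority heuristic along these lines. Everything numeric here, including the weights and thresholds, is invented for illustration; boopr does not publish its actual scoring, and the point is only that none of these inputs require reading content.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    distinct_reporters: int    # how many different users reported this account
    invite_chain_strikes: int  # enforcement actions against accounts in the invite chain
    account_age_days: int
    posts_per_day: float       # recent activity rate

def review_priority(s: AccountSignals) -> float:
    """Hypothetical score: higher means a human should look sooner.
    Weights are illustrative, not boopr's real model."""
    score = 0.0
    score += 10.0 * s.distinct_reporters   # independent reporters weigh heavily
    score += 4.0 * s.invite_chain_strikes  # a troubled invite chain raises suspicion
    if s.account_age_days < 30:
        score += 5.0                       # new accounts get less benefit of the doubt
    if s.posts_per_day > 50:
        score += 5.0                       # spam-like activity rate
    return score

flagged = AccountSignals(distinct_reporters=4, invite_chain_strikes=2,
                         account_age_days=10, posts_per_day=80.0)
print(review_priority(flagged))  # → 58.0
```

A score like this would only prioritize human review, not trigger automatic enforcement; a person still reads the report descriptions.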
If you believe an enforcement action against your account was made in error, you can appeal.
Email [email protected] with your username and a clear explanation of why you believe the decision was wrong. Include any relevant context or evidence.
Account terminations for CSAM, child exploitation, or non-consensual intimate imagery are not eligible for appeal.
Moderation on an encrypted platform works differently than what you're used to on Instagram, X, or Reddit. Those platforms can read what you post and run automated systems to scan for violations. We can't, and we don't want to build the infrastructure that would let us.
The model leans on four things that don't require us to read your content. The invite-only signup cuts off most drive-by abuse, spam, and bot activity before it starts, since every user was brought in by someone who already had skin in the game. In-app tools let you block (currently enforced on your device), remove friends (which rotates your profile key so they can't decrypt anything you post afterward), and pick the audience for each post.
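The friend-removal behavior above (rotating your profile key so a removed friend can't decrypt anything you post afterward) can be sketched as key management alone. This is a hypothetical sketch: key "wrapping" is simulated by storing a copy per friend, where a real client would encrypt the profile key to each friend's public key, and none of these names come from boopr's code.

```python
import os

class Profile:
    """Illustrative key management for a profile whose posts are encrypted
    under a symmetric profile key, shared ("wrapped") per friend."""

    def __init__(self):
        self.profile_key = os.urandom(32)
        self.wrapped_keys: dict[str, bytes] = {}  # friend -> copy of the key
                                                  # (stand-in for real key wrapping)

    def add_friend(self, friend_id: str) -> None:
        self.wrapped_keys[friend_id] = self.profile_key

    def remove_friend(self, friend_id: str) -> None:
        # Rotate: generate a fresh key and re-share it with everyone
        # except the removed friend. Whoever held the old key can still
        # read old posts; new posts are encrypted under the new key.
        self.wrapped_keys.pop(friend_id, None)
        self.profile_key = os.urandom(32)
        for remaining in self.wrapped_keys:
            self.wrapped_keys[remaining] = self.profile_key

me = Profile()
me.add_friend("alice")
me.add_friend("bob")
old_key = me.profile_key
me.remove_friend("bob")

assert "bob" not in me.wrapped_keys
assert me.profile_key != old_key                  # new posts use a fresh key
assert me.wrapped_keys["alice"] == me.profile_key # alice can still decrypt
```

The design choice this illustrates is forward-looking revocation: removal doesn't claw back what someone already saw, but it cuts off everything posted from that moment on.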
When something does go wrong, reports are how we learn about it. A human on our team reads every one. We base decisions on what you describe, patterns across multiple reporters, and account-level signals (such as the number of distinct reporters, the enforcement history of accounts in your invite chain, your account age, and recent activity rates) — none of which require decrypting content. Enforcement outcomes are also recorded against the inviter's chain: people who repeatedly invite bad actors may see their account flagged for review. The invite system is the feedback loop.
This approach has a real cost: proactive content scanning and automated takedowns aren't options for us, so we won't catch things a scanning system would. That's the trade we made when we decided we couldn't read your posts, and we think an invite-only community with human moderators is the right side of it.
We may update these guidelines as boopr grows and as we learn from how the community evolves. When we make changes, we'll update the effective date at the top of this page and notify users through the app. Continued use of boopr after changes take effect constitutes acceptance of the updated guidelines.
Questions about these guidelines? Get in touch: