Another Idea for Family Friendly Social Messaging

**Update**: App.Net has shut down.

Take a look at Mastodon for a good Twitter replacement.

I wanted to show my Twitter feed to my son today, but I couldn’t just hand him my phone and let him read it himself. I’m a grown-up and I follow grown-ups, and a lot of the grown-ups in my feed use bad language. That’s fine for all of us grown-ups, but I don’t want to just hand over my phone to my kids.

Twitter has no facility for being kid-friendly, but it could create one.

[legal note: The ideas expressed in this blog post are hereby, without limitation, donated to the public domain. All copyright, intellectual property right, and/or ownership rights are hereby explicitly waived. In other words: someone should take this idea and go make a business out of it, and don’t worry about paying me a cent.]

What We Need

To achieve a family-friendly social messaging experience, we need to create some new capabilities. Remember that this entire ecosystem would be a subset of the full App.Net experience, and someone opts into it voluntarily by adopting these extra restrictions and conventions.

  1. Tagging of accounts. It would be very cool if the platform allowed people/accounts to tag their own profile (using platform-supported metadata). I’m thinking 3 levels:
     • (E) meaning fit for everyone, young or old. By adopting this tag, you’d be saying “this account will always post things that are appropriate for everyone.” And you’d be agreeing to some Ts&Cs (described below).
     • (U) for unclassified, meaning the person isn’t saying one way or another. This is the default for all new accounts.
     • (M) for mature (i.e. grown-up). This is just a self-tagging mechanism so that people can basically say “I’m giving you fair warning. I’m not kid-friendly.”

The idea is to give me an ability to signal to other people the nature of my account. That is, I’m saying “because I’ve elected to rate myself E, I’m telling the world that I will only use language or topics or pictures, etc. that are appropriate for everyone.”
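App.Net supported attaching arbitrary JSON annotations to objects, so a self-applied rating could ride on the account profile the same way. The annotation type and field names below are invented for illustration; this is a minimal sketch of what the metadata lookup might look like:

```python
# Hypothetical account-rating annotation, modeled loosely on App.Net's
# JSON annotation format (a reverse-domain "type" plus a "value" dict).
# The type name and fields here are assumptions, not a real API.
RATING_ANNOTATION_TYPE = "net.example.content_rating"

VALID_RATINGS = {"E", "U", "M"}  # Everyone / Unclassified / Mature

def account_rating(account: dict) -> str:
    """Return the account's self-declared rating, defaulting to 'U'."""
    for ann in account.get("annotations", []):
        if ann.get("type") == RATING_ANNOTATION_TYPE:
            rating = ann.get("value", {}).get("rating")
            if rating in VALID_RATINGS:
                return rating
    return "U"  # new and untagged accounts are Unclassified by default

alice = {"username": "alice", "annotations": [
    {"type": RATING_ANNOTATION_TYPE, "value": {"rating": "E"}}]}
bob = {"username": "bob"}  # no annotation, so the default applies

print(account_rating(alice))  # E
print(account_rating(bob))    # U
```

Defaulting to (U) matters: an account that never opts in makes no claim either way, which is exactly the semantics described above.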

  • Clients that support filtering and allow parents/teachers/schools to control the accounts that can be followed. Parents could say “only E” or “not M”, etc. Likewise, there is the classic market for parent/kid-friendly clients that look for bad words or links to adult content anyway.
  • An enforcement body to monitor violations of the E rating. When someone marks their account “E” they would accept some terms and conditions. They would agree that if the account uses bad language or links to inappropriate content, they could lose the E rating. This enforcement body would be responsible for writing the code of conduct, and carrying out the enforcement process in a fair way.
  • A school / student / teacher business model. I.e a way for a school to subscribe to the service and get all its kids onto it. That would also require a management interface that lets them enroll, unenroll, monitor accounts, group accounts, etc. A parent interface would also be handy.
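A filtering client could express a parent’s policy (“only E” or “not M”) as a simple predicate over the E/U/M ratings described above. A sketch, with the policy names invented for illustration:

```python
# Sketch of a parental follow-policy check. The policy names
# ("only_E", "not_M") are hypothetical, not from any real client.
def may_follow(policy: str, rating: str) -> bool:
    if policy == "only_E":
        return rating == "E"
    if policy == "not_M":
        return rating != "M"
    return True  # no restriction configured

# Accounts a child wants to follow, with their self-declared ratings.
followed = {"teacher": "E", "newsbot": "U", "comedian": "M"}

visible = [name for name, r in followed.items()
           if may_follow("only_E", r)]
print(visible)  # ['teacher']
```

Under an “only E” policy the unclassified account is excluded too, which is the conservative reading a parent would expect.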

Given these things, we can create a kid-friendly social messaging system. Clients will restrict content (to the extent that we can get the kids to only use approved clients). It would be handy if Alpha (the App.Net web client) were aware of the content ratings, too, so that if an E-restricted account logged into the web site, it wouldn’t see the U and M content. Of course, the App.Net response to this is to recommend that someone build their own ecosystem on top of App.Net, using App.Net as their plumbing. It would just be nice if Alpha weren’t the really easy workaround for seeing unregulated content.


I figure someone will create an automated moderation system that will catch a fair amount of obvious profanity, abusive content, and potentially adult content. You’d have 3 levels of moderation:

  1. Automated triage: a program would just read every message and look for the really obvious F-words, acronyms (WTF, STFU), or abusive language. I assume such technology exists and isn’t hard to apply. Likewise, it would have to check all URLs to make sure they landed at safe pages.
  2. Semi-automated triage: anything the first level thought was questionable would go into a human quarantine to be figured out by a person. Stuff that is acceptable would be released into the stream.
  3. Complaint resolution: if someone manages to get something past all the filters, there would have to be a “flag this post” or other complaint mechanism to push a post into the human triage queue.
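The three levels above can be sketched as a routing function: automated checks either publish a post or quarantine it for human review, and the complaint mechanism pushes an already-published post back into the human queue. The word list is a deliberate placeholder; a real system would use a proper profanity classifier and a URL-safety service:

```python
# Minimal sketch of the three-level moderation flow described above.
# OBVIOUS_BLOCKLIST stands in for a real profanity classifier.
OBVIOUS_BLOCKLIST = {"wtf", "stfu"}

def automated_triage(text: str) -> str:
    """Level 1: route a post to 'publish' or 'quarantine'."""
    words = set(text.lower().split())
    if words & OBVIOUS_BLOCKLIST:
        return "quarantine"   # held for level-2 human review
    if "http" in text.lower():
        return "quarantine"   # every link needs a safety check first
    return "publish"

def flag_post(post_id: str, queue: list) -> None:
    """Level 3: a reader complaint sends a post to the human queue."""
    queue.append(post_id)

human_queue: list = []
print(automated_triage("great museum exhibit today"))  # publish
print(automated_triage("WTF is this"))                 # quarantine
flag_post("post-123", human_queue)
print(human_queue)                                     # ['post-123']
```

The key design point is that level 1 never rejects outright; anything doubtful flows to a human, so false positives cost review time rather than lost posts.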

Perhaps this moderation can be implemented through Amazon’s Mechanical Turk?

Example Usage Ideas

School Social Network

A school gets a “school” account and gives out social media accounts that are affiliated with the school. Teachers, administrators, students, and parents can all use it. By default, everyone must be rated E. Content is monitored and moderated (as above). Kids (if they’re older and have smartphones) can get mobile apps that hook into it, keep them connected to the network, and let them interact with their teachers, parents, etc.

Now, I expect that kids will still use services like BBM, iMessage, and such to interact with their peers so that parents don’t snoop. But there’s a fair chance that lively discussion and interaction could happen using the school-oriented network. It lets teachers maintain two identities (their personal and their teacher persona) while still using modern social media. I’m not sure, but there’s a reasonable chance that you could let students follow other App.Net users who are also marked (E), even if those people are not at the school. Students can give out their school App.Net ID to others, and the content that comes in to them will be filtered and monitored.

Tagging Individual Posts

It would be cool if, even though I’m rated U or M, I could make a post that was individually tagged E. I mean, I’m a grown up, but I have kids. Some of my colleagues have kids. I might post something like “this museum has an awesome exhibit” and I want it to show up in the kid-friendly clients. I tag the post E. Maybe that goes straight to human moderation. Maybe it doesn’t. It might be handy, though.
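Combining per-post tags with account ratings means the effective rating of a post is the post’s own tag when present, falling back to the author’s account rating. A sketch, reusing the E/U/M letters; the field names are invented for illustration:

```python
# Sketch: a post-level rating tag overrides the author's account
# rating. The "rating" field name is an assumption, not a real schema.
def effective_rating(post: dict, author_rating: str) -> str:
    return post.get("rating", author_rating)

def visible_to_kids(post: dict, author_rating: str) -> bool:
    # Kid-friendly clients would show only posts whose effective
    # rating is E (possibly after human moderation, as noted above).
    return effective_rating(post, author_rating) == "E"

museum_post = {"text": "this museum has an awesome exhibit",
               "rating": "E"}
normal_post = {"text": "just a regular grown-up post"}

print(visible_to_kids(museum_post, "M"))  # True: E tag overrides M
print(visible_to_kids(normal_post, "M"))  # False: falls back to M
```

Routing E-tagged posts from M-rated accounts straight to human moderation, as suggested above, would close the obvious loophole of a mature account mislabeling its posts.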

Kid-Friendly App.Net Clients

There’s no reason the whole thing has to be done by schools or oriented around schools. Someone who wants to create a kid-friendly chat service can develop an App.Net client that adds the parental controls and the rating metadata. The biggest roadblock I see for this is the (at the time of this writing) $5/month cost per child. But maybe the developer of the app can buy bulk accounts from App.Net and get a discount he can pass on to the users. I dunno.


This is an area where App.Net’s infrastructure is brilliantly suited to build something that simply cannot be built on top of another social infrastructure. An entrepreneurial developer could cook this up in a matter of weeks and would be completely encouraged by the nature of the platform.

As I think about some of the applications to school scenarios, I wonder if there is public funding from various governments that might help kickstart such a system.