Imagine a world where the technology that makes us happiest is the most successful. Where products are rewarded for long-term improvements to mental health, wellbeing and community development. Where products which may seem appealing to users but which actually have an overall negative impact on mental health, wellbeing and community are downgraded, or become less financially sustainable.
All sounds great in principle. But how would it work?
Here’s a starter for ten…
Products would self-assign a set of moral objectives
Individual user experience is assessed based on whether or not these objectives are achieved
Only when the objectives are met is the product able to commercialise the user
Otherwise the product either loses the user, or is forced to iterate the individual experience until the moral objective is met and the user can be commercialised.
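Purely as a thought experiment, the four steps above could be sketched in code. Everything here is hypothetical — the names (`MoralObjective`, `UserExperience`, the 0–1 wellbeing score and its threshold) are illustrative placeholders, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class MoralObjective:
    """Step 1: a product self-assigns a moral objective (hypothetical model)."""
    name: str
    threshold: float  # minimum wellbeing score at which the objective counts as met

@dataclass
class UserExperience:
    """Step 2: the individual user experience, assessed on a 0-1 wellbeing scale."""
    user_id: str
    wellbeing_score: float

def can_commercialise(objective: MoralObjective, experience: UserExperience) -> bool:
    """Step 3: only when the objective is met may the product commercialise the user."""
    return experience.wellbeing_score >= objective.threshold

def next_action(objective: MoralObjective, experience: UserExperience) -> str:
    """Step 4: otherwise, iterate the individual experience rather than commercialise."""
    if can_commercialise(objective, experience):
        return "commercialise"
    return "iterate_experience"
```

So a teenager scoring 0.5 against a self-esteem objective with a 0.7 threshold would trigger `"iterate_experience"` rather than ads — the gate, not the scoring, is the point of the model.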
Here’s an example.
Recent events (take the tragic story of Molly Russell from the start of the year) have shown that social networks have a moral responsibility to support and protect the self-esteem of young people. Given Instagram’s demographic and the content that it shares, it would make sense that one of its moral objectives is to accept this responsibility.
The individual experience of teenagers using Instagram, and its impact on self-esteem, would be assessed by:
The product itself using the insights it already has (which indeed are used to keep us hooked) to assess the vulnerability of the user
AI to assess in real time the emotional state and response of the user to the site (see facial recognition or emotion mapping technologies for the latest on this)
Plus user feedback which privately captures their real sentiment (The ‘Real’ Like Button)
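How the three signals above would combine is left open in the model. One naive sketch is a weighted blend — the weights and the 0–1 normalisation are entirely made up for illustration, not drawn from any real system:

```python
def wellbeing_score(product_vulnerability: float,
                    ai_emotion_score: float,
                    private_feedback: float,
                    weights: tuple = (0.3, 0.4, 0.3)) -> float:
    """Blend the three assessment signals into one 0-1 wellbeing score.

    All inputs are assumed normalised to 0-1:
    - product_vulnerability: 1 = resilient, 0 = vulnerable (the product's own insights)
    - ai_emotion_score: real-time emotional response (emotion-mapping AI)
    - private_feedback: privately captured sentiment (the 'Real' Like Button)
    The weights are arbitrary placeholders, chosen only so they sum to 1.
    """
    signals = (product_vulnerability, ai_emotion_score, private_feedback)
    return sum(w * s for w, s in zip(weights, signals))
```

In practice, choosing and auditing those weights would be one of the "million gremlins" — who decides whether a private thumbs-down outweighs a cheerful AI-read facial expression?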
Based on this (obviously seamless, realtime, non-invasive ;)) assessment, if Instagram is boosting self-esteem for a user, then that user is effectively ‘cleared’ for commercialisation. Advertisers would pay a higher CPM knowing that this user is engaged in a sustainable way (and with more information on their emotional state). In a non-ad funded business, users could be prompted for payment (think tipping for a service that makes you feel good).
If Instagram is not leading to a net positive experience for a user, then the user would be told as much. Given this information, they may be less likely to continue using the service. Instagram would not be able to push advertising, and so instead would be forced to iterate the personal experience for this user to a point at which it made the user happier (think changing the content that is shown to them, the way notifications are delivered, limiting the endless scroll etc). This iteration would continue until the point that Instagram was able to commercialise the user, and reviews would be ongoing to make sure that the level of service was maintained.
Of course, all product development comes with compromise. For example, Instagram may have to restrict content to improve self-esteem amongst teenagers, which could lead to a reduction in the sense of creativity or entertainment that those users receive from the platform. In this instance, it means the service also trends towards a more curated set of content which only makes us feel good - clearly not something that would work for a site whose moral objective was the dissemination of unbiased news and information.
The devil is always in the detail - and this model is host to a million gremlins. However, it’s an idea. And a clear example of how businesses can shift to be rewarded based on something other than attention.
It’s time to imagine a new future.
It’s time to find our phone/life balance.
#spacephonelifebalance #noexcusesfacebook #bigideas #thereallikebutton #likebutton #ethicaltech #digitalwellbeing #spaceapp #thetruthbutton #facebook #centreforhumanetechnology #newbusinessmodels #attention #appsforhappiness #screentime