New Features - Multi-Option, Discussion, Question Edit and Battle!

We’ve been hard at work developing new functionality and today we’re releasing the following:

1.  Multi-Option Questions

It’s now possible to define your own answer options, instead of being limited to just ‘Yes’ or ‘No’.  This has been the single most requested feature, so we’re very happy to deploy it.  Currently there is a limit of four answer options, although this will be increased as we refine the analytics so that results can be meaningfully displayed.

2.  Discussion

Beyond just voting, our users have told us they want to be able to engage in discussion, so we’ve implemented Disqus on the question page.

3.  Question Editing

By popular demand, it’s now possible to edit a question after you’ve asked it.  The purpose of this functionality is to fix basic errors and typos.  A log of question changes is maintained for auditing to protect the integrity of Yacket.

4.  Yacket Battle

An exciting feature has been released today: ‘Yacket Battle’.  This allows questions to be asked where the answer options are two Twitter users.  For example, a user might ask the question “Who do you want to win the 2012 US Election?”, and the options could be @BarackObama and @MittRomney.  Apart from making your questions more visually interesting, this has the effect of creating relationships with questions asked by other Yacket users about the same people.  The easiest way to understand it is to see it in action.  Click here to see.  It works with any Twitter user - they don’t need to have a Yacket account to be part of the battle.

We intend to release new functionality over the next 2 weeks, and I’ll update you as it happens.  In the meantime, I’d love for you to try out the new features and help us continue our Public Beta experiment as we work towards moving Yacket out of beta and into a wider public launch.

I’d like to take this opportunity to thank everyone who has contributed to this public beta so far - the feedback and data we are gathering are really helping us shape Yacket into a great product.

Any questions, comments or problems, please email me directly at aaron@yacket.com. Click here to go to Yacket.com.

Aaron Smith

Founder

The Public Beta - 48 Hours, 1 Mistake, 7 Measurements

A strong sense of community exists in the startup space and I’ve been fortunate to benefit from its knowledge and experience.  In search of greater involvement in the community, I started spending ‘Yacket’ time at York Butter Factory, a co-working space in Melbourne designed for startups where the culture is one of collaboration and tough love.  I experienced the sense of collaboration immediately but maybe I’m still too new for the tough love.

In this spirit of sharing, I’ll endeavour to use this blog as a means of documenting the successes, failures and challenges we experience on this journey. To begin, I wanted to share the 48 hours since Yacket was made available for public access with a particular focus on some of the metrics. 

Quick Recap

The goal of our public beta was to attract a willing group of people and ask them to use our site.  We’d then record the data and test our assumptions.  I’m a big fan of the Lean Startup approach popularised by Eric Ries @ericries, where the overriding strategy is to develop an MVP (minimum viable product) and get it into users’ hands as soon as possible to validate ideas and test assumptions.  The core loop is Build > Measure > Learn, then iterate and repeat.

Our chosen mechanism for opening our Public Beta and reaching our initial test users was to announce via www.Betali.st.  It’s a great site that features selected startups pre-launch.  On our landing page, we asked people to submit their email address if they were interested in being involved in the Public Beta.  This comprised the entirety of our media activity for the Public Beta - no media release, no PR.

I reasoned that I wanted to get some good data and feedback, and then iterate the offering before seeking further feedback from a wider group of users.  I thought that Betali.st-sourced users would likely be more forgiving of the rough edges and missing functionality in our product, and more willing than the average user to offer feedback.  This turned out to be true - I have feedback emails that could be described as essays, and I’m extremely grateful for this.

I also wanted to discover our mistakes with a user base that a) wasn’t too large and b) was willing to engage and point out mistakes and incorrect assumptions.  It was also in my mind that something could go horribly wrong.  It’s a delicate balance between having enough users to find the problems, but not so many that you risk trashing your reputation if you really mess something up.

The Measurements

Measuring is a key aspect of the Lean Startup approach.  The difficulty is knowing what to measure and, where applicable, how to judge whether the metrics are good.  At the very minimum, it’s important to create a baseline for future comparisons.  It turns out that I hadn’t considered a key metric for Yacket - more on that mistake later.  My metrics for this initial experiment were as follows.

1.  How many users showed interest
2.  How many of those went on to actually sign up when invited
3.  How many asked a question
4.  How many users answered a question
5.  What percentage of registered users were active
6.  What percentage of users were willing to contribute information about themselves
7.  Was there any network effect in attracting other users? 

1.  How many users showed interest?

Between the 20th of September and the 1st of October, 209 people completed the form linked from www.betali.st.  We don’t know how this compares to others featured on Betali.st - maybe founder @marckohlbrugge will publish the data he collects on this in the future.

2.  How many of those went on to actually sign up when invited?

At 6.45pm on Monday, 1st of October, I sent 209 email invitations announcing that those who had registered interest could now sign up.  Of those 209 emails, 5 bounced and 114 were opened.  Of those, 61 went on to create a Yacket account.
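
For clarity, here’s the funnel stage by stage.  A minimal sketch - the counts are the ones reported above; the loop is just for illustration:

```python
# Invitation funnel from the first 48 hours.
stages = [
    ("invited", 209),
    ("delivered", 209 - 5),   # 5 emails bounced
    ("opened", 114),
    ("signed up", 61),
]

# Conversion of each stage relative to the previous one,
# plus the overall rate relative to invitations sent.
for (name, count), (_, prev) in zip(stages[1:], stages):
    print(f"{name}: {count} ({count / prev:.0%} of previous, "
          f"{count / stages[0][1]:.0%} of invited)")
```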

3.  How many asked a question?

48 users asked at least one question, which represents about 30% of users.  15% of the total users created 80% of the questions, which roughly mirrors the Twitter experience, where 10% of users are responsible for 90% of tweets.  My sense before the public beta was that a smaller percentage would be responsible for a greater proportion of the questions.  However, the users who signed up are likely more willing than most to get involved, which would explain a slightly inflated participation rate - and in any case, this was a concentrated episode.
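
To show the kind of calculation behind the “15% created 80%” figure, here’s a sketch.  The per-user question counts below are made up; only the method is real:

```python
# Hypothetical questions-per-user counts; swap in the real distribution.
questions_per_user = sorted(
    [12, 9, 8, 7, 5, 4, 3, 2, 2, 1, 1, 1, 1, 1], reverse=True)

total = sum(questions_per_user)
top_n = max(1, round(len(questions_per_user) * 0.15))  # top 15% of askers
top_share = sum(questions_per_user[:top_n]) / total

print(f"Top {top_n} users ({top_n / len(questions_per_user):.0%}) "
      f"asked {top_share:.0%} of all questions")
```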

4.  How many users answered a question?

108 users answered at least one question.  On average, users answered 10.1 questions each, contributing a total of 1091 votes in 48 hours.  In total, Yacket recorded 1324 votes on user questions.  The difference of 233 between 1324 and 1091 is explained by votes from unregistered users - i.e., people who clicked on a link on Twitter and voted the same way people do in traditional web polls.
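
The reconciliation as a quick sketch, using the numbers reported above:

```python
# Reconciling the vote totals from the first 48 hours.
registered_voters = 108
avg_votes_each = 10.1
registered_votes = round(registered_voters * avg_votes_each)  # ~1091

total_votes = 1324
anonymous_votes = total_votes - registered_votes  # cast via Twitter links
print(f"registered: {registered_votes}, anonymous: {anonymous_votes}")
```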

5.  What percentage of registered users were active?

We define an active user as one who either asked or answered a question in the 48-hour period.  Of the 151 initial users, 108 met the definition of active - or 71%.  This was lower than I’d expected.  I thought that a user who had completed the signup process would be likely to participate in some capacity, given their willingness to that point.  I’m investigating this.  It may be that some people wanted to reserve their Twitter username in Yacket but didn’t want to participate further yet.

6.  What percentage of users were willing to contribute information about themselves (e.g. gender, date of birth, education etc)?

Of the active users, 66% have already contributed information about themselves, and on average, each active user has contributed 7 pieces of data.  Prior to the launch, this was one of the greatest unknowns for Yacket.  Our conversations with potential users before the public beta revealed that most were reluctant to share the type of information Yacket asks for.  This data has been a surprising and encouraging endorsement of our assumption that people would be willing to contribute information about themselves in order to receive more information back - instantly.  We did expect users to be slower to contribute this information than they have been.

7.  Was there any network effect in attracting other users?  - i.e. did friends / connections of the initial users sign up too?

One of the reasons for running the public beta with an effective control group was to be able to measure any network effect.  The only way other people could learn about Yacket in this period was through the initial users asking questions, which were then tweeted from their Twitter accounts.  The data shows that an additional 90 people joined.  So the initial 61 people turned into 151 - meaning each initial user effectively brought in about 1.5 new users in the first 48 hours.  Although encouraging, the sample is too small and probably unrepresentative in some ways.  We’ll monitor and report back in a future post.  We have lots of work to do here.
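
In growth terms, this ratio is the viral coefficient, or k-factor.  A minimal sketch of the calculation using the numbers above - the compounding loop is purely illustrative, since real k-factors decay quickly:

```python
# Viral factor over the first 48 hours: new users per existing user.
initial_users = 61
total_users = 151
new_users = total_users - initial_users  # 90
k_factor = new_users / initial_users     # ~1.48

print(f"k-factor: {k_factor:.2f}")

# Illustrative only: if that rate somehow held per 48-hour cycle,
# growth would compound like this.
users = total_users
for cycle in range(1, 4):
    users += round(users * k_factor)
    print(f"after cycle {cycle}: {users} users")
```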

What did we forget to plan to measure?

For the most part, our assumptions about our control group were correct.  However, there was one area where I really dropped the ball: clicks on Twitter links.  When a user asks a question in Yacket, it is automatically tweeted from that user’s Twitter account (unless the user turns it off, and only 5 people did).  This auto-tweeting is a key part of how Yacket questions reach new people.  Prior to launch, it didn’t occur to me that I should include it in our metrics.  Stupid of me.

Fortunately, I could still measure it, and the data was pretty ugly.  On average, each question received 3.2 clicks from Twitter.  The kicker is that the data shows the number of clicks doesn’t correlate with the number of followers a user has.  So a Yacket user with 3,000 followers on Twitter was receiving the same number of clicks as a user with 200 followers.
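
One way to check a relationship like this is a simple correlation over per-user data.  A sketch - the follower and click counts below are hypothetical, and the hand-rolled Pearson helper is just for illustration:

```python
# Correlation between a user's Twitter followers and clicks per question.
followers = [200, 450, 800, 1200, 3000, 5500]
clicks    = [3,   4,   2,   3,    4,    3]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(followers, clicks):.2f}")  # near 0 = no relationship
```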

I recently read ‘Grouped’ by Paul Adams (http://www.thinkoutsidein.com/blog/).  In the book, Paul’s research (mainly with Facebook) showed that 80% of our communication, both online and offline, is with the same 4 or 5 people, irrespective of the number of friends or followers we have.  Perhaps our experience with Twitter links is a variation on that theme.

We have ideas for improving this, and an A/B-type experiment we ran showed a significant improvement (variant A resulted in 0 click-throughs and variant B in 9).  I’ve received great suggestions from some advisors on this, so I’ll save the detail for a later post, after we’ve run more experiments and gathered more data.
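
As an aside, for anyone wanting to sanity-check a small A/B result like this one, Fisher’s exact test handles low counts well.  A sketch, assuming hypothetical impression counts per variant (we only reported the click-through totals above):

```python
from scipy.stats import fisher_exact

# Hypothetical impressions per variant; only the click totals are real.
a_clicks, a_impressions = 0, 500
b_clicks, b_impressions = 9, 500

table = [[a_clicks, a_impressions - a_clicks],
         [b_clicks, b_impressions - b_clicks]]
_, p_value = fisher_exact(table)
print(f"p-value: {p_value:.4f}")  # below 0.05 suggests a real difference
```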

What’s Next?

We are now moving into a cycle of fast iterations based on feedback, the first of which will be released tomorrow, and then as regularly as possible thereafter.  We have a number of big features waiting in the wings, and the initial feedback has given us the confidence to prioritise those for the next round of the Yacket public beta.

Thanks so much to everybody who has contributed and used Yacket in the public beta.  If you haven’t already, please check out Yacket and give us your feedback.

Aaron Smith

Founder
@aaronsmith333

Yacket Public Beta is Live!

3, 2, 1 … launch.  Yacket is available for the world to see.  We look forward to bringing you further updates via this blog shortly.